Querying based on a filter and getting metrics we shouldn't get #5709


Closed
Marcusfve opened this issue Dec 13, 2023 · 6 comments

Marcusfve commented Dec 13, 2023

We had maintenance yesterday, and since then, when we query metrics based on a filter, the result also contains data from other metrics and filters. We are not sure why the query is not returning the correct data. We are querying data that lives only in the ingesters, and we are running Cortex version 1.16.
The problem started after the maintenance was done and all the machines were restarted.
One hour after the maintenance I got a CortexIngesterTSDBHeadCompactionFailed alert.
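For reference, this alert fires on the ingester's TSDB compaction-failure counter. A minimal sketch of the alerting rule, assuming the cortex-mixin definition (the exact expression, duration and labels may differ in your setup):

# Sketch only: roughly what CortexIngesterTSDBHeadCompactionFailed checks (cortex-mixin assumption).
groups:
  - name: cortex_ingester_alerts
    rules:
      - alert: CortexIngesterTSDBHeadCompactionFailed
        # The ingester failed to compact its in-memory TSDB head into a block on disk.
        expr: rate(cortex_ingester_tsdb_compactions_failed_total[5m]) > 0
        for: 15m
        labels:
          severity: critical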

This is what we got when we queried the data; some of these metrics shouldn't be there.
[screenshot of the query result showing unexpected metrics]

I'm thinking the ingesters may have been affected by the forced shutdown.

For now I was able to fix the issue by cleaning the ingester volume, but I want to know why this happened in the first place and what I can do to prevent it.

If you need any more information, please feel free to ask.

yeya24 (Contributor) commented Dec 13, 2023

It would be great if you could provide steps to reproduce the issue; otherwise it will be hard for us to figure it out.

What was the time range you were querying? And it should have queried only the ingesters, not the store gateways?

@friedrichg (Member)

@Marcusfve also include the config if you can, hiding secrets of course.

Marcusfve (Author) commented Dec 14, 2023

I really don't have steps to reproduce the issue, because it happened randomly and I have no idea how to trigger it again. The time range was initially 13 hours (that is the time range within which queries are sent to the ingesters).
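For context, the 13-hour window comes from the querier time-range settings shown in the config below. A minimal sketch of the same settings in Cortex's YAML config form (the YAML key names are my assumption of the equivalents of the CLI flags; check the Cortex docs):

# Sketch only: YAML form of -querier.query-ingesters-within / -querier.query-store-after.
querier:
  # Queries touching the last 13h of data are sent to the ingesters.
  query_ingesters_within: 13h
  # Only data older than 12h is fetched via the store-gateways,
  # so the 12h-13h band is covered by both paths.
  query_store_after: 12h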

We are running Cortex in a Docker Swarm cluster and the config is the following:

version: '3.9'

services:

  distributor:
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "5"
    image: cortex_cortex_v1.16
    networks:
      - cortex-network
    ports:
      - "9009:9009"
      - "9095:9095"
    volumes:
      - /srv/certs:/etc/cortex/certs
    command:
      -server.http-listen-port=9009
      -server.http-tls-cert-path=/etc/cortex/certs/server.cer
      -server.http-tls-key-path=/etc/cortex/certs/server.key
      -target=distributor
      -distributor.ring.store=consul
      -distributor.ring.consul.hostname=consul-client:8500
      -distributor.ingestion-rate-limit-strategy=global
      -consul.hostname=consul-client:8500
      -distributor.extend-writes=true
      -auth.enabled=true
      -distributor.health-check-ingesters=true
      -distributor.ingestion-rate-limit=500000
      -distributor.replication-factor=3
      -distributor.shard-by-all-labels=true
      -ring.heartbeat-timeout=10m
      -validation.reject-old-samples=true
      -validation.reject-old-samples.max-age=12h
      -distributor.ring.prefix=collectors/
      -blocks-storage.backend=s3
      -blocks-storage.s3.bucket-name=cortex
      -blocks-storage.s3.endpoint=s3
      -blocks-storage.s3.access-key-id=access
      -blocks-storage.s3.secret-access-key=password
      -store.engine=blocks
      -ring.prefix=collectors/
      -blocks-storage.tsdb.block-ranges-period=2h
      -blocks-storage.tsdb.dir=/data/tsdb
      -ring.store=consul
      -distributor.ha-tracker.enable=true
      -distributor.ha-tracker.enable-for-all-users=true
      -distributor.ha-tracker.consul.hostname=consul-client:8500
      -tenant-federation.enabled=true
      -validation.max-length-label-value=4096
      -distributor.ingestion-burst-size=2000000
    deploy:
      mode: replicated
      replicas: 7


  ingester:
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "5"
    image: cortex_cortex_v1.16
    networks:
      - cortex-network
    ports:
      - "9008:9009"
      - "9098:9095"
    volumes:
      - /srv/certs:/etc/cortex/certs
      - ingester-data:/data
    command:
      -server.http-listen-port=9009
      -server.http-tls-cert-path=/etc/cortex/certs/server.cer
      -server.http-tls-key-path=/etc/cortex/certs/server.key
      -target=ingester
      -ring.store=consul
      -consul.hostname=consul-client:8500
      -distributor.replication-factor=3
      -blocks-storage.backend=s3
      -auth.enabled=true
      -blocks-storage.s3.bucket-name=cortex
      -blocks-storage.s3.endpoint=s3
      -blocks-storage.s3.access-key-id=access
      -blocks-storage.s3.secret-access-key=password
      -store.engine=blocks
      -ring.prefix=collectors/
      -blocks-storage.tsdb.block-ranges-period=2h
      -blocks-storage.tsdb.dir=/data/tsdb
      -blocks-storage.tsdb.retention-period=96h
      -blocks-storage.tsdb.ship-interval=1m
      -distributor.health-check-ingesters=true
      -distributor.shard-by-all-labels=true
      -ingester.heartbeat-period=15s
      -ingester.join-after=0s
      -ingester.max-global-series-per-metric=950000
      -ingester.max-series-per-metric=0
      -ingester.num-tokens=512
      -ingester.unregister-on-shutdown=true
      -ring.heartbeat-timeout=10m
      -tenant-federation.enabled=true
      -validation.max-length-label-value=4096
      -ingester.max-global-series-per-user=9500000
      -ingester.max-series-per-user=0
    deploy:
      mode: replicated
      replicas: 7


  querier:
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "5"
    image: cortex_cortex_v1.16
    networks:
      - cortex-network
    ports:
      - "9003:9009"
      - "9093:9095"
    volumes:
      - /srv/certs:/etc/cortex/certs
    command:
      -server.http-listen-port=9009
      -target=querier
      -blocks-storage.backend=s3
      -server.http-tls-cert-path=/etc/cortex/certs/server.cer
      -server.http-tls-key-path=/etc/cortex/certs/server.key
      -blocks-storage.s3.bucket-name=cortex
      -auth.enabled=true
      -blocks-storage.s3.endpoint=s3
      -blocks-storage.s3.access-key-id=access
      -blocks-storage.s3.secret-access-key=password
      -ring.store=consul
      -blocks-storage.bucket-store.sync-dir=/data/tsdb
      -consul.hostname=consul-client:8500
      -store-gateway.sharding-ring.store=consul
      -store-gateway.sharding-ring.consul.hostname=consul-client:8500
      -store-gateway.sharding-ring.replication-factor=3
      -store-gateway.sharding-enabled=true
      -distributor.replication-factor=3
      -distributor.shard-by-all-labels=true
      -distributor.health-check-ingesters=true
      -store.engine=blocks
      -blocks-storage.bucket-store.ignore-deletion-marks-delay=1h
      -querier.max-concurrent=20
      -querier.query-ingesters-within=13h
      -querier.query-store-after=12h
      -querier.worker-parallelism=4
      -store.max-query-length=4405h
      -blocks-storage.bucket-store.bucket-index.enabled=true
      -tenant-federation.enabled=true
    deploy:
      mode: replicated
      replicas: 3


  store-gateway:
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "5"
    image: cortex_cortex_v1.16
    networks:
      - cortex-network
    ports:
      - "9004:9009"
      - "9094:9095"
    volumes:
      - /srv/certs:/etc/cortex/certs
      - store-gateway-data:/data
    command:
      -server.http-listen-port=9009
      -target=store-gateway
      -store-gateway.sharding-enabled=true
      -server.http-tls-cert-path=/etc/cortex/certs/server.cer
      -server.http-tls-key-path=/etc/cortex/certs/server.key
      -store-gateway.sharding-ring.store=consul
      -store-gateway.sharding-ring.consul.hostname=consul-client:8500
      -store-gateway.sharding-ring.replication-factor=3
      -store-gateway.sharding-strategy=default
      -blocks-storage.backend=s3
      -auth.enabled=true
      -blocks-storage.s3.bucket-name=cortex
      -blocks-storage.s3.endpoint=s3
      -blocks-storage.s3.access-key-id=access
      -blocks-storage.s3.secret-access-key=password
      -store-gateway.sharding-ring.wait-stability-min-duration=0
      -blocks-storage.bucket-store.sync-dir=/data/tsdb
      -store.engine=blocks
      -server.grpc.keepalive.ping-without-stream-allowed=true
      -server.grpc.keepalive.min-time-between-pings=10m
      -blocks-storage.bucket-store.ignore-deletion-marks-delay=1h
      -blocks-storage.bucket-store.bucket-index.enabled=true
      -blocks-storage.bucket-store.index-header-lazy-loading-enabled=true
      -tenant-federation.enabled=true
    deploy:
      mode: replicated
      replicas: 3

  ruler:
    image: cortex_cortex_v1.16
    networks:
      - cortex-network
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "5"
    ports:
      - "9001:9009"
      - "9091:9095"
    volumes:
      - /srv/certs:/etc/cortex/certs
    command:
      -server.http-listen-port=9009
      -target=ruler
      -api.response-compression-enabled=true
      -server.http-tls-cert-path=/etc/cortex/certs/server.cer
      -server.http-tls-key-path=/etc/cortex/certs/server.key
      -blocks-storage.backend=s3
      -auth.enabled=true
      -blocks-storage.s3.bucket-name=cortex
      -blocks-storage.s3.endpoint=s3
      -blocks-storage.s3.access-key-id=access
      -blocks-storage.s3.secret-access-key=password
      -store.engine=blocks
      -blocks-storage.bucket-store.sync-dir=/data/tsdb
      -ring.store=consul
      -consul.hostname=consul-client:8500
      -distributor.extend-writes=true
      -distributor.health-check-ingesters=true
      -distributor.replication-factor=3
      -distributor.shard-by-all-labels=true
      -experimental.ruler.enable-api=true
      -querier.query-ingesters-within=13h
      -querier.query-store-after=12h
      -ruler-storage.backend=s3
      -ruler-storage.s3.bucket-name=ruler
      -ruler-storage.s3.endpoint=s3
      -ruler-storage.s3.secret-access-key=password
      -ruler-storage.s3.access-key-id=access
      -ruler.enable-sharding=true
      -ruler.ring.store=consul
      -ruler.ring.consul.hostname=consul-client:8500
      -store-gateway.sharding-enabled=true
      -store-gateway.sharding-ring.consul.hostname=consul-client:8500
      -store-gateway.sharding-ring.replication-factor=3
      -store-gateway.sharding-ring.store=consul
      -ruler.alertmanager-url=https://alertmanager.tst.tehik.ee/alertmanager
      -store.engine=blocks
      -tenant-federation.enabled=true
    deploy:
      mode: replicated
      replicas: 3


  alertmanager:
    image: cortex_cortex_v1.16
    networks:
      - cortex-network
    ports:
      - "9000:9009"
      - "9090:9095"

    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "5"
    volumes:
      - /srv/certs:/etc/cortex/certs
      - alertmanager-data:/data
      - /srv/alertmanager/:/etc/cortex/alertmanager/
    command:
      -server.http-listen-port=9009
      -target=alertmanager
      -auth.enabled=true
      -alertmanager-storage.backend=s3
      -server.http-tls-cert-path=/etc/cortex/certs/server.cer
      -server.http-tls-key-path=/etc/cortex/certs/server.key
      -alertmanager-storage.s3.secret-access-key=password
      -alertmanager-storage.s3.access-key-id=access
      -alertmanager-storage.s3.endpoint=s3
      -alertmanager-storage.s3.bucket-name=alertmanager-test
      -alertmanager.storage.path=/data
      -alertmanager.web.external-url=/alertmanager
      -alertmanager.configs.fallback=/etc/cortex/alertmanager/alertmanager.yml
      -experimental.alertmanager.enable-api=true
      -alertmanager.sharding-enabled=true
      -alertmanager.sharding-ring.store=consul
      -alertmanager.sharding-ring.consul.hostname=consul-client:8500
      -tenant-federation.enabled=true
    deploy:
      mode: replicated
      replicas: 3


  compactor:
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "5"
    image: cortex_cortex_v1.16
    networks:
      - cortex-network
    ports:
      - "9006:9009"
      - "9096:9095"
    volumes:
      - /srv/certs:/etc/cortex/certs
      - compactor-data:/data
    command:
      -server.http-listen-port=9009
      -target=compactor
      -blocks-storage.backend=s3
      -server.http-tls-cert-path=/etc/cortex/certs/server.cer
      -server.http-tls-key-path=/etc/cortex/certs/server.key
      -blocks-storage.s3.bucket-name=cortex
      -auth.enabled=true
      -blocks-storage.s3.endpoint=s3
      -blocks-storage.s3.access-key-id=access
      -blocks-storage.s3.secret-access-key=password
      -compactor.sharding-enabled=true
      -compactor.ring.store=consul
      -compactor.ring.prefix=collectors/
      -compactor.ring.consul.hostname=consul-client:8500
      -store.engine=blocks
      -compactor.data-dir=/data
      -compactor.block-ranges=2h,12h,24h
      -compactor.blocks-retention-period=90d
      -compactor.cleanup-interval=15m
      -compactor.compaction-concurrency=1
      -compactor.compaction-interval=5m
      -tenant-federation.enabled=true
      -compactor.cleanup-concurrency=20
    deploy:
      mode: replicated
      replicas: 7


networks:
  cortex-network:
    name: cortex-network
    attachable: true

volumes:
  ingester-data:
  store-gateway-data:
  compactor-data:
  alertmanager-data:

Please ask if you need any more information. I would also really appreciate it if you could take a look at my config overall and suggest improvements, in case I have missed something important.

@alanprot (Member)

Do you still have the bad TSDBs?
If you do, can you try to restore them on another ingester and see if the problem is still there? It would be nice to have a peek at those TSDBs.
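A minimal sketch of what that could look like in this Docker Swarm setup: a throwaway ingester service mounting a copy of the suspect TSDB directory. The service name and the host path /srv/debug-tsdb are hypothetical, and in practice you would point it at a test environment rather than the production ring:

  # Hypothetical debug service; /srv/debug-tsdb holds a copy of the bad ingester volume.
  ingester-debug:
    image: cortex_cortex_v1.16
    networks:
      - cortex-network
    volumes:
      - /srv/debug-tsdb:/data/tsdb
    command:
      -server.http-listen-port=9009
      -target=ingester
      -auth.enabled=true
      -ring.store=consul
      -consul.hostname=consul-client:8500
      -blocks-storage.backend=s3
      -blocks-storage.s3.bucket-name=cortex
      -blocks-storage.s3.endpoint=s3
      -blocks-storage.s3.access-key-id=access
      -blocks-storage.s3.secret-access-key=password
      -blocks-storage.tsdb.dir=/data/tsdb
    deploy:
      replicas: 1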

Marcusfve (Author) commented Dec 15, 2023

I do not have the bad TSDBs anymore, but as I said, I was able to fix the issue by cleaning the ingester volume; maybe that can give you some hint.

@friedrichg (Member)

Very related to #5419.
I have never seen this. If it happens again, we will need a new report based on the latest Cortex.

friedrichg closed this as not planned (won't fix, can't repro, duplicate, stale) on Nov 28, 2024