We are currently exploring scaling out our Kubernetes cluster (downsizing from 8 GB RAM nodes to 4 GB RAM nodes), and we noticed that there is quite a variation in the core-dump-handler pods' memory usage:
core-dump-handler-8g52g 0m 130Mi
core-dump-handler-9vwk9 0m 58Mi
core-dump-handler-jw5fv 0m 86Mi
core-dump-handler-q9hz5 0m 21Mi
core-dump-handler-v98sh 0m 102Mi
core-dump-handler-vd7kd 0m 14Mi
core-dump-handler-vsv2r 0m 38Mi
core-dump-handler-zgvjz 0m 114Mi
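For reference, per-pod figures like the above can be pulled with kubectl top; a minimal sketch, assuming the chart's documented default namespace observe (adjust to your install):

    # List memory usage for the handler DaemonSet pods only
    kubectl top pods -n observe --no-headers | grep '^core-dump-handler'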
It appears that handlers that have processed a crash have significantly higher memory usage than those that have not.
Could this be resolved?
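Not a fix for the underlying growth, but as a stopgap the handler container's footprint could be capped with a memory request/limit so a pod that has processed a large dump cannot grow unbounded on the smaller nodes. A minimal sketch, assuming the DaemonSet is named core-dump-handler in the observe namespace and the handler is the first container in the pod spec (the values are illustrative only):

    # Add an illustrative memory request/limit to the handler container
    kubectl -n observe patch daemonset core-dump-handler --type='json' -p='[
      {"op": "add",
       "path": "/spec/template/spec/containers/0/resources",
       "value": {"requests": {"memory": "64Mi"}, "limits": {"memory": "128Mi"}}}
    ]'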