feat: expose AMI cache TTL as runtime flag #9052
chrisdoherty4 wants to merge 1 commit into aws:main
Conversation
Looking deeper, it seems a handful of reconcilers set a shorter TTL than the minimum requeue time for the AMI reconciler. The cache TTL configurability does help reduce the API calls, so it still feels like a worthwhile configuration option; longer cache windows are acceptable in our case.
Force-pushed a25243a to d3b7986.
Modified the PR to only expose the AMI cache TTL. Being able to tweak this for our use case greatly reduces API call volume and avoids hitting rate limits.
Operators running large fleets can generate significant `DescribeImages` API call volume due to frequent AMI reconciles. This change makes the AMI cache TTL configurable so operators can tune it for their workload without rebuilding: `--ami-cache-ttl` (env: `AMI_CACHE_TTL`, default: `1m`). The default preserves existing behaviour.
Force-pushed d3b7986 to a1f37c7.
We generally avoid surfacing too much config if we can avoid it. What were you going to set this to? We might just raise the default; 1m seems a bit low.
Either 15m or 1h. We haven't decided, and the flexibility is what would let us tweak things. I'm curious what problems there are with surfacing the configuration, assuming it has a sane default and is well documented?
Turns out this seems to be a regression somewhere between v1.8.1 and v1.10; the jumps here are from when I deployed v1.10.
Opened #9063
I wanted to float an alternative approach to solving this issue that we've discussed internally. We're hesitant to expose cache TTL configurations directly for a couple of reasons:
An alternative we could consider is surfacing per-API client-side rate-limit buckets as configuration. I believe this more directly addresses the core issue, limiting the impact of individual Karpenter controllers, while also not exposing internal implementation details. All internal reconcilers need to be tolerant to rate limiting whether it comes from the client or from the server.
Hi @jmdeal. Expressing this as client-side rate limiting would work for us, thanks.
Hi @chrisdoherty4, just checking back in on this: is this something you'd be interested in working on? If not, we can also work on it on our side. Thanks!
@ryan-mist I'm not planning to implement anything. |

Fixes #N/A
Description
Operators running large fleets (15,000 nodes across 50+ clusters) with 10s of node classes can generate significant `DescribeImages` API call volume because the reconciler requeues periodically (on the order of 30s-1m) and uses a hardcoded 1-minute cache TTL. This change makes the cache TTL independently configurable so users can decide an appropriate AMI cache time for their use case:

- Flag: `--ami-cache-ttl`
- Environment variable: `AMI_CACHE_TTL`
- Default: `1m` (preserves existing behavior)
How was this change tested?
Added tests in `pkg/operator/options/suite_test.go` covering CLI flag override, env var fallback, and validation rejection of non-positive values.
Does this change impact docs?