
Conversation

@jonmcewen (Contributor) commented Dec 23, 2024

Description

Remove `ignore_changes` from the ASG for the Docker Autoscaler. The reasoning described in the comment was poor: `min_size` and `desired_capacity` are fixed at zero, so they won't change, and `max_size` should change when `runner_worker.max_jobs` is changed, which will not cause an immediate scale-up. If it causes a scale-down, that is the logical desired behaviour.
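
For context, the change concerns a lifecycle block like the following (an illustrative sketch only; the resource name, variables, and launch template here are assumptions, not the module's actual code):

```hcl
# Hypothetical ASG resembling the Docker Autoscaler worker group.
resource "aws_autoscaling_group" "docker_autoscaler" {
  name                = "docker-autoscaler"        # assumed name
  min_size            = 0                          # fixed at zero
  desired_capacity    = 0                          # fixed at zero
  max_size            = var.runner_worker_max_jobs # assumed variable
  vpc_zone_identifier = var.subnet_ids             # assumed variable

  launch_template {
    id      = aws_launch_template.worker.id        # assumed resource
    version = "$Latest"
  }

  # Before this PR, a lifecycle block like the following swallowed any
  # change to max_size, so raising runner_worker.max_jobs had no effect:
  #
  #   lifecycle {
  #     ignore_changes = [min_size, max_size]
  #   }
  #
  # With it removed, Terraform applies the new max_size on the next
  # apply; the fleeting plugin still controls the actual instance count
  # up to that limit.
}
```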
@github-actions (Contributor) commented

Hey @jonmcewen! 👋

Thank you for your contribution to the project. Please refer to the contribution rules for a quick overview of the process.

Make sure that this PR clearly explains:

  • the problem being solved
  • the best way a reviewer and you can test your changes

By submitting this PR, you confirm that you hold the rights to the code added and agree that it will be published under this LICENSE.

The following ChatOps commands are supported:

  • /help: notifies a maintainer to help you out

Simply add a comment with the command in the first line. If you need to pass more information, separate it from the command with a blank line.

This message was generated automatically. You are welcome to improve it.

@jonmcewen jonmcewen changed the title Allow changes to "runner_worker.max_jobs" for Docker Autoscaler fix: Allow changes to "runner_worker.max_jobs" for Docker Autoscaler Dec 23, 2024
@PreNoob commented Jan 10, 2025

Just wondering why `ignore_changes` on desired_capacity is not required? If the desired capacity of the ASG is set to 3 by the fleeting plugin and the module is applied again, this would result in an immediate scale-down, wouldn't it?
For the other values I agree with you :)

@kayman-mk (Collaborator) commented

> Just wondering why `ignore_changes` on desired_capacity is not required? If the desired capacity of the ASG is set to 3 by the fleeting plugin and the module is applied again, this would result in an immediate scale-down, wouldn't it? For the other values I agree with you :)

That came into my mind too.
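
To make the concern concrete (a sketch under the assumption that the fleeting plugin adjusts capacity outside Terraform; not a quote from the module): one way to let `max_size` changes apply while protecting runtime capacity would be to narrow, rather than remove, the ignored attributes:

```hcl
# Assumed narrowing, not the module's confirmed fix: Terraform keeps
# managing min_size and max_size, but ignores the desired_capacity
# that the fleeting plugin raises and lowers at runtime.
lifecycle {
  ignore_changes = [desired_capacity]
}
```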

@jonmcewen jonmcewen changed the title fix: Allow changes to "runner_worker.max_jobs" for Docker Autoscaler fix: allow changes to "runner_worker.max_jobs" for Docker Autoscaler Feb 12, 2025
@jonmcewen (Contributor, Author) commented

@kayman-mk can this be merged now please?

@kayman-mk (Collaborator) commented

Looks good. Could you please update the PR description to match the change? It's used as the commit message.

@kayman-mk kayman-mk self-requested a review February 20, 2025 19:17
@kayman-mk kayman-mk merged commit 0624391 into cattle-ops:main Feb 20, 2025
19 checks passed
kayman-mk pushed a commit that referenced this pull request Feb 20, 2025
🤖 I have created a release *beep* *boop*
---


## [9.0.2](9.0.1...9.0.2) (2025-02-20)


### Bug Fixes

* allow changes to "runner_worker.max_jobs" for Docker Autoscaler ([#1221](#1221)) ([0624391](0624391))
* always encrypt EBS volumes if the KMS key is given ([#1248](#1248)) ([76ae944](76ae944)), closes [#1242](#1242)
* return security group id for docker-autoscaler in `runner_sg_id` ([#1249](#1249)) ([9c573b6](9c573b6)), closes [#1241](#1241)

---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

---------

Co-authored-by: cattle-ops-releaser-2[bot] <134548870+cattle-ops-releaser-2[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>