Replies: 33 comments 69 replies
-
I am facing the same situation (1 dropped out of 5) for
-
Also having the same problem here, with the Actions Runner Controller. I suspect the API is having issues (the managed runners are behaving similarly poorly), but the status page is not really reflecting reality.
-
I am also having exactly the same problem. Can someone please advise?
-
Any updates? This issue is blocking a lot of things for us.
-
Experiencing the same thing. Restarting the runner service doesn't change anything; recreating the runner fixes the issue, but it's a hassle if the runners keep going idle and have to be recreated often. Still don't know the cause.
-
I think it was working fine until version 2.304; there were no issues with that version. GitHub forces the version update even if it is not tested properly.
-
Recreating a runner makes the other runners pick up jobs automatically. This is very difficult to understand.
-
2 runners started to pick up jobs now. So instead of 4 out of 8 not working, it improved to 2 out of 8 not working, though I didn't change anything. Probably GitHub is working on a fix. The runners are updated to
-
Same error here, after (auto) updating from 2.315 to 2.316. Reinstalling the runners (Current runner version: '2.316.0') works, and they are now processing jobs again.
-
@jcahigal But it doesn't last long; it will fail to pick up jobs again the next time you run it.
-
Do you guys use these self-hosted runners most of the time?
-
I have 105 self-hosted runners, most running continuously every day. I had two waves of runners going idle, probably related to when they updated. All seems to be in order after the second round of recreating the runners; all runners are currently working as expected.
-
Having this on 2.317.0: sometime around late June, our runner just stopped picking up jobs. The repo is private, all config.sh checks pass, and removing and re-adding the runner did not help.
-
Experiencing this in March 2025.
-
What the heck, I assumed this issue would surely have been resolved since Apr 2024, but I have the same issue. Why isn't GitHub replying? Don't they have an active community manager or discussion manager?
-
I'm also experiencing the same issue, running a single self-hosted runner on Ubuntu 24.04 as a systemd service. I notice that if I have a number of jobs queued, they all get picked up and run, but as soon as there is some idle time, the runner stops picking up jobs. I have to restart the systemd service every time I want it to start picking up jobs again.
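The restart workaround described above can be automated. A minimal sketch, assuming the runner was installed as a systemd service via `sudo ./svc.sh install`, which registers a unit named `actions.runner.<scope>.<runner-name>.service`; the org and runner names below are placeholders:

```shell
#!/usr/bin/env bash
# Crude stopgap, not a fix: restart the runner's systemd unit when it is inactive.
# Assumes the stock actions/runner service install; names below are placeholders.

runner_unit() {
  # Build the unit name that svc.sh registers for a runner.
  printf 'actions.runner.%s.%s.service' "$1" "$2"
}

unit="$(runner_unit "my-org" "my-runner")"

# Only act on hosts that actually run systemd; ignore restart failures here.
if command -v systemctl >/dev/null 2>&1; then
  systemctl is-active --quiet "$unit" || sudo systemctl restart "$unit" || true
fi
```

Dropped into cron every few minutes, this papers over the idle-runner symptom until the underlying cause is found.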
-
I have the same issue. I uninstalled the services, reinstalled them, registered the runner again, and also changed the labels, but no luck. Every time, I need to restart the service for it to pick up a job. This has been happening for the last two weeks; before that, it worked well for three years. Do we have a solution? I can't keep track of what changed, and I have to keep restarting the service. Thank you.
-
It is also happening to me. We have just one self-hosted runner, and it does not pick up jobs unless I restart the service.
-
Same here... Triggering new workflows sometimes seems to get the old ones out of the queued stage and starts them.
-
Same here, I have to restart the runner service. The GitHub status page claims "all systems operational", but besides this thread I don't see a way to submit this as a formal issue. @queenofcorgis can you help us please?
-
Same here on v2.322.0; it is really disruptive. Hope this will be fixed soon @github
-
Same here on 2.322.0, on a newly created runner. Any workaround?
-
It hasn't reproduced for me for at least a week now.
-
I'm still having issues with this. Using the k8s/Helm chart deployment, workers are not scaling despite tasks in the queue. Manually setting a minWorker count results in tasks being picked up and processed.
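As a sketch of that workaround in chart form: GitHub's `gha-runner-scale-set` Helm chart calls the floor `minRunners` (the legacy actions-runner-controller autoscaler uses `minReplicas` on a `HorizontalRunnerAutoscaler` instead); which key corresponds to the commenter's `minWorker` depends on the chart deployed. All values below are placeholders:

```yaml
# values.yaml fragment for the gha-runner-scale-set Helm chart (placeholder values).
# Keeping a non-zero floor of runners sidesteps scale-from-zero stalls.
githubConfigUrl: https://github.com/my-org   # placeholder org URL
githubConfigSecret: my-runner-secret         # placeholder secret name
minRunners: 2    # always keep 2 runners alive so queued jobs get picked up
maxRunners: 8
```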
-
[FIXED] It took me about 3 hours to realize the issue. After adding the runner name as a label, the job was picked up automatically and everything worked as expected.

Issue: Identical Runner Names ❌
When all runner names were the same, jobs remained queued and did not execute.
Example (without unique labels):
- dev: runs-on: [self-hosted, Linux, X64, dev]
- staging: runs-on: [self-hosted, Linux, X64, staging]
- production: runs-on: [self-hosted, Linux, X64, production]

Solution: Unique Labels for Each Runner ✅
After adding unique labels to each runner, jobs were picked up correctly and everything worked as expected.
Example (with unique labels, same as above):
- dev: runs-on: [self-hosted, Linux, X64, dev]
- staging: runs-on: [self-hosted, Linux, X64, staging]
- production: runs-on: [self-hosted, Linux, X64, production]

Conclusion
If using self-hosted runners, ensure each runner has a distinct label to avoid jobs being stuck in the queue.
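The fix above can be scripted at registration time. A minimal sketch, assuming the stock `config.sh` from the actions/runner package, whose `--labels` flag takes a comma-separated list; the org URL, token, and runner names are placeholders, and the commands are only printed here, not executed:

```shell
#!/usr/bin/env bash
# Register one runner per environment, each with a distinct label.
# Placeholders: org URL, <TOKEN>, runner names. Commands are printed, not run.

labels_for() {
  # environment -> comma-separated label list for config.sh --labels
  printf 'self-hosted,Linux,X64,%s' "$1"
}

for env in dev staging production; do
  echo "./config.sh --url https://github.com/my-org --token <TOKEN>" \
       "--name runner-$env --labels $(labels_for "$env") --unattended"
done
```

Each runner then matches only the jobs whose `runs-on` list includes its unique label.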
-
Hi, I'm still experiencing this. I have only one self-hosted runner. Not a lot of helpful information here.
-
I just found that if I change the organization-level runner to a repository-level runner, then it works. Maybe it's because I am using GitHub Enterprise and somehow don't really have access to my self-hosted runner!?
-
For whoever scrolled to the bottom: for me, the reason was that I was using



-
Select Topic Area: Bug
Body
Problem
I have 8 self-hosted runners that are configured identically. 4 of them stopped working yesterday (22 Apr) around 08:44 UTC.
When a workflow runs, 4 of the runners actively pick up jobs, while the other 4 stay idle as if there were no jobs. Those 4 runners are not offline; they are marked as online, connected, and idle, yet they do not pick up jobs. The expected behaviour is that all 8 of them can pick up jobs.
It is not related to any concurrency limit; it is consistently those specific 4 runners that do not pick up jobs, across different repos in the organization.
What it looks like
The runners page in the organization, 4 active, 4 idle:
But at the same time, there are plenty of checks waiting for a runner to pick them up:
About the runners
- Version `2.315.0`
- Labelled `self-hosted` and in the same runner group `Default`
I couldn't figure out why 4 of them work and why 4 of them suddenly stopped working.
What I have tried to fix it
- `sudo ./svc.sh stop` and then `sudo ./svc.sh start`. This does mark the service as offline and then idle again, but it still does not pick up jobs
- `sudo reboot`; similarly, the service goes offline then online, but does not pick up jobs
- `./run.sh --check --url <org_url> --pat <my_pat>`, all passed
- Checking the runner logs in the `_diag` directory, only this error since it stopped working
- Checking the last worker logs in the `_diag` directory, nothing special
- Checking `journalctl`, no errors logged, as if no new job was ever requested
- Checking their versions, `2.315.0`, all at the latest version
- Docker is installed and active (`sudo systemctl is-active docker.service`)
Anyone faced this before? Any clue what else I can check? Could this be a bug on GitHub's side? Thanks in advance!
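The checks above can be collected into one pass. A sketch, assuming a stock actions/runner install; the install directory, org URL, and PAT are placeholders, and the directory guard means the script does nothing on machines without a runner:

```shell
#!/usr/bin/env bash
# One-pass health check for a self-hosted runner install (placeholder paths/URL/PAT).

newest_log() {
  # Print the most recently modified file among the arguments, if any.
  ls -t "$@" 2>/dev/null | head -n 1
}

RUNNER_DIR=/opt/actions-runner   # placeholder install location
ORG_URL=https://github.com/my-org
PAT=REPLACE_ME                   # placeholder personal access token

if [ -d "$RUNNER_DIR" ]; then
  cd "$RUNNER_DIR" || exit 1
  sudo ./svc.sh status                              # service state as systemd sees it
  ./run.sh --check --url "$ORG_URL" --pat "$PAT"    # runner connectivity checks
  tail -n 50 "$(newest_log _diag/Runner_*.log)"     # newest listener log
  sudo systemctl is-active docker.service           # Docker needed for container jobs
fi
```

This only gathers evidence in one place; it does not fix the idle-runner behaviour.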