Replies: 70 comments 29 replies
-
Hi @Shereef - thanks much for the feedback. Our initial design was focused on deployment scenarios, but we want to address the use case that you've outlined. I appreciate you outlining this so nicely and providing these suggestions. I hope that this is an area that we can address in the future. Thanks again.
-
The desired behavior here, I believe, would be for jobs with the same concurrency group to queue and run one after another instead of being canceled.
-
I think I just need the same, but adding runs to a queue instead of cancelling the workflow. I mean: don't run in parallel, but in serial instead. This will help, for example, with using the same build result or cache in multiple runs for the same commit. The first one will create the artifacts/cache and the next ones will be faster.
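A minimal sketch of that cache-reuse scenario, assuming a hypothetical build script and output directory; the group key and paths are illustrative. Note that with today's cancel-in-progress: false only a single run can wait per group, which is exactly the limitation this thread is about.

```yaml
name: build
on: [push]

# Runs for the same commit share a group; today at most one extra run waits,
# whereas the request is for all of them to queue and reuse the warmed cache.
concurrency:
  group: build-${{ github.sha }}
  cancel-in-progress: false

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: build/                 # assumed build output directory
          key: build-${{ github.sha }}
      - run: ./build.sh                # hypothetical build script
```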
-
I was surprised to see that canceling previous runs is the only officially supported option -- waiting and executing sequentially is what I need, and it seems to me like a clear use case (canceling a deployment could result in a broken state). So far I've found this Action: https://github.com/ahmadnassri/action-workflow-queue But ideally, with an official feature, workflows sitting and waiting for previous workflows to finish wouldn't consume Actions minutes.
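For reference, a rough sketch of how that community action is typically wired in; the version ref and deploy script are illustrative, and the waiting step itself occupies a runner (so it still consumes minutes), which is the drawback mentioned above.

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # Blocks until earlier runs of this workflow have finished.
      - uses: ahmadnassri/action-workflow-queue@v1
      - run: ./deploy.sh   # hypothetical deployment step
```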
-
Would be great to be able to queue workflow runs (ideally without consuming action minutes). Was surprised that new runs cancel existing ones by default.
-
We hit this limitation while testing out the Merge Queue feature that is currently in limited beta. Our use case is ensuring that only one workflow is running for the PR at the top of the merge queue. When concurrency is set at the workflow level (for the merge_group event type) and multiple PRs are queued within the merge queue, the system cancels queued check suites once a new PR is added to the queue. It would be fantastic if we had the ability to disable canceling queued jobs!
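One partial workaround, sketched below under the assumption that each merge-queue entry should get its own group: key the concurrency group on the merge group's head ref instead of the target branch, so adding a new PR to the queue no longer cancels check suites for earlier entries. This trades serialization for parallelism, so it may not fit every merge-queue setup.

```yaml
on:
  merge_group:

# Each merge-queue entry has a unique head ref, so runs for different entries
# land in different groups and no longer cancel one another.
concurrency:
  group: ci-${{ github.event.merge_group.head_ref || github.ref }}
  cancel-in-progress: false
```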
-
If you're using self-hosted runners, a workaround/hack we used was to label one of these runners specifically, and then make all the jobs that need to be in a concurrency group just run on this labelled runner instead. No concurrency shenanigans. That causes them to queue up globally across all workflows without any caps on number of waiting jobs. The only major limitation is that if this runner fails, these jobs won't use any other runner. That was considered an acceptable risk to unblock us until we had a better solution.
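A sketch of that labelled-runner workaround; the label name and deploy script are illustrative. Because the label matches only one machine, every job targeting it queues behind the others, at the cost of the single point of failure noted above.

```yaml
jobs:
  deploy:
    # Only one self-hosted runner carries the "serial-deploy" label, so these
    # jobs queue on that machine instead of relying on the concurrency feature.
    runs-on: [self-hosted, serial-deploy]
    steps:
      - run: ./deploy.sh   # hypothetical deployment step
```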
-
My team also needs this feature.
-
My team also needs this.
-
My team also needs this feature - mainly for queueing multiple actions, each with Terraform plan/deploy steps, which cannot be run at the same time due to a lock on the Terraform state.
-
A must-have for my team.
-
Really need this to be implemented. I am currently using https://github.com/ben-z/gh-action-mutex for now, but this does pollute the git history.
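For context, a hedged sketch of how that mutex action is commonly used; the ref and the branch input are assumptions from memory of its README, so check the project before relying on them. The lock is coordinated through commits to a dedicated branch, which is where the git-history pollution comes from.

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # Acquire a repository-wide mutex before the critical section; the lock
      # state is tracked via commits on a dedicated branch.
      - uses: ben-z/gh-action-mutex@main   # pin to a released tag in practice
        with:
          branch: gh-mutex                 # assumed lock branch name
      - run: ./deploy.sh                   # hypothetical deployment step
```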
-
Need this feature too.
-
GitHub, I'll add my team's need for this feature as well. I see a LONG list of replies like this, with no response from GitHub. (I'm in DevOps.) I'm going to put in a GitHub support ticket referencing this request to see if there are any plans to implement it. Our use case is that I have created a highly parallelized workflow that builds and deploys four distinct services in a monorepo. In this workflow, the update_service action is called by the jobs for each of those services, AND we have a very active CI/CD environment which can be processing several merges almost simultaneously. While I am discussing adopting the current concurrency model (cancel all pending workflow runs except the latest), until and unless that gets the green light our workflow runs will continue to risk colliding and corrupting the shared deployment environment with competing service updates.
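A sketch of how the current feature could at least scope the collisions per service in a monorepo like the one described; the service names, matrix layout, and script are hypothetical, and the one-pending-run limit still applies to each group.

```yaml
jobs:
  update_service:
    strategy:
      matrix:
        service: [api, web, worker, scheduler]   # hypothetical service names
    # Job-level concurrency keyed per service: updates to the same service
    # serialize (within the one-pending-run limit) while different services
    # still deploy in parallel.
    concurrency:
      group: update-${{ matrix.service }}
      cancel-in-progress: false
    runs-on: ubuntu-latest
    steps:
      - run: ./update_service.sh ${{ matrix.service }}   # hypothetical script
```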
-
Thanks for the feature request; we need this feature.
-
+1 Our use case:
Controlling the queue and allowing more pending jobs would be one way to get around this. We're investigating other options.
-
Any news???
The solution using concurrency would be something like:
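Presumably something along these lines - a sketch assuming the workflow/ref group key that comes up elsewhere in this thread, not the commenter's exact snippet:

```yaml
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: false   # keeps the running job, but only one run may wait
```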
Ten comments were marked as off-topic.
-
I voted on the issue, but the flood of plus one comments seems pretty impressive too. So, "hey, I want multiple overlapping jobs to queue and run serially instead of terminating" as well. I'm pretty sure this is possible, because one job will queue and multiple jobs waiting on runners will queue. So all the necessary logic seems like it's essentially present somewhere. 😅
-
The more I try to use GitHub Actions for advanced use cases, the more I stumble onto community conversations like this.
Yet, years later and still... nothing... Not so much as a reply on whether the feature is planned, on the roadmap at all, or if this is something we should just learn to live with.
-
Hey, sorry the Actions team has been quiet on this. We have added this to the list of paper cuts we are hoping to address in the next 12 months :) I don't have a timeframe for this one, but I want to say we know it's a pain point and do have it on our list <3
-
Good to see a reply, @nebuk89. This is really a LOT more than a paper cut; it's a fact that many workflows are incrementally changing/updating things like databases, and MUST run in sequence. The current design presumes that later workflows are just an addition to the same codebase, and so are expected to produce the same result whether or not an intervening run is cancelled.
-
💬 Your Product Feedback Has Been Submitted 🎉 Thank you for taking the time to share your insights with us! Your feedback is invaluable as we build a better GitHub experience for all our users. Here's what you can expect moving forward ⏩
Where to look to see what's shipping 👀
What you can do in the meantime 💻
As a member of the GitHub community, your participation is essential. While we can't promise that every suggestion will be implemented, we want to emphasize that your feedback is instrumental in guiding our decisions and priorities. Thank you once again for your contribution to making GitHub even better! We're grateful for your ongoing support and collaboration in shaping the future of our platform. ⭐
-
Hi all! I have a more complex desired behavior: we have a step that deploys with Terraform. We want that, if a running job has already reached that step, the other runs wait for it instead of killing it. Ex:
Deployment succeeded, version
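The closest approximation with today's features is sketched below, under the assumption that the deployment can be split into its own job: only that job shares a concurrency group, so runs proceed in parallel until they reach the deploy stage. It still does not give the exact "wait only if another run already reached the step" semantics, and only one run can wait at a time.

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: ./build.sh                      # hypothetical; earlier stages overlap freely

  deploy:
    needs: build
    runs-on: ubuntu-latest
    # Only the deploy job contends for the group, so runs serialize once they
    # reach the Terraform apply. At most one run can wait here today; further
    # runs are cancelled.
    concurrency:
      group: terraform-deploy
      cancel-in-progress: false
    steps:
      - run: terraform apply -auto-approve   # illustrative
```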
-
Per the original request: my team needs something like this.
Motivating Example
I have a 'deploy' GitHub Workflow that takes 600 seconds to run a deployment.
Requirements
Expected Behavior
Documented Behavior
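To make the documented-versus-expected gap concrete, here is an illustrative timeline (the 600-second figure comes from the comment above; the push times are assumed), together with the configuration that produces it today.

```yaml
# Assumed timeline for a 600s deploy with the concurrency block below:
#   t=0s   push A starts the deploy   -> running
#   t=100s push B arrives             -> pending
#   t=200s push C arrives             -> B is cancelled, C becomes pending
# Documented behavior: one running run plus at most one pending run per group.
# Expected/requested behavior: B and C both queue and run in order after A.
concurrency:
  group: deploy-${{ github.ref }}
  cancel-in-progress: false
```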
-
Current behavior
Desired behavior
Sorry if this was discussed before; I tried to search but didn't find anything.
Our current implementation
To give it more context:
Imagine this scenario:
- We have 5 PRs for Team A
- We merge to main and update the 5 PRs
- All 5 will want to deploy to the env, plus the main branch deploy that is ongoing
- The last PR to merge is going to be queued
- All other PRs will be cancelled and have to be retried manually
Suggested solutions
One of the below would work (see the sketch after this list):
- cancel-in-queue: false - a new command to disable cancelling queued jobs
- cancel-in-progress: false - would not cancel anything and leave everything in the queue
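A rough sketch of the first suggestion; cancel-in-queue is not an existing GitHub Actions option, it is the hypothetical new key being proposed here.

```yaml
concurrency:
  group: deploy-${{ github.ref }}
  cancel-in-progress: false   # keep the run that is already deploying
  cancel-in-queue: false      # proposed: keep every queued run instead of only the newest
```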