Run job_config handlers in new tasks #2637
Conversation
Build failed. ✔️ pre-commit SUCCESS in 2m 16s
Force-pushed from 0038fd1 to 6d41c24
Build failed. ❌ pre-commit FAILURE in 2m 05s
Build succeeded. ✔️ pre-commit SUCCESS in 2m 02s
Force-pushed from 6bc5293 to fc19acd
Build failed. ❌ pre-commit FAILURE in 2m 08s
Force-pushed from f222051 to fe2d4f1
Build succeeded. ✔️ pre-commit SUCCESS in 2m 01s
Force-pushed from 90806f8 to 507bb5e
Build succeeded. ✔️ pre-commit SUCCESS in 2m 06s
this is a great improvement, thanks Maja!
@@ -55,6 +57,13 @@
logger = logging.getLogger(__name__)


def celery_run_async(signatures: list[Signature]) -> None:
    logger.debug("Signatures are going to be sent to Celery (from update_copr_build_state).")
this can also be called from update_testing_farm_run, so the log message should be adjusted
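A minimal sketch of what the helper could look like with a caller-agnostic log message, as suggested above. The body beyond the debug line is not visible in the truncated diff, so the group(...).apply_async() dispatch is an assumption about how the signatures get sent off:

```python
import logging

from celery.canvas import Signature, group

logger = logging.getLogger(__name__)


def celery_run_async(signatures: list[Signature]) -> None:
    # Caller-agnostic message: this helper is used by update_copr_build_state
    # as well as update_testing_farm_run.
    logger.debug("Signatures are going to be sent to Celery.")
    # Dispatch the signatures as independent tasks instead of running the
    # handlers sequentially inside the current (babysitting) task.
    group(signatures).apply_async()
```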
If we run all the handlers for all the job_configs sequentially, then for every entry in the db that needs babysitting we can reach the hard time limit for the task. I did not make the vm image build handlers run in parallel, because the code currently needs the tasks' output. Co-authored-by: Nikola Forró <[email protected]>
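A rough illustration of the approach described in the commit message: build one signature per handler run and hand them to celery_run_async instead of executing the handlers in-process. The task name, the accessor on packages_config, and the kwargs are hypothetical, not the actual packit-service identifiers:

```python
from celery.canvas import Signature


def update_copr_build_state(build, packages_config) -> None:
    signatures: list[Signature] = []
    for job_config in packages_config.get_job_views():  # hypothetical accessor
        signatures.append(
            Signature(
                "task.run_copr_build_handler",  # hypothetical task name
                kwargs={
                    "build_id": build.id,
                    "job_config": job_config,
                },
            )
        )
    # Each handler now runs in its own Celery task with its own time budget,
    # instead of all of them sharing the time limit of this babysitting task.
    celery_run_async(signatures)
```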
Force-pushed from 507bb5e to b24c82c
for more information, see https://pre-commit.ci
Build succeeded. ✔️ pre-commit SUCCESS in 2m 06s
Build succeeded (gate pipeline). ✔️ pre-commit SUCCESS in 2m 07s
Merged 2004555 into packit:main
There is a correlation between this new exception in Sentry and some hard time limit errors; look at the graphs.
In Splunk, the above Celery task id had more than 25,528 associated events and ran for a really long time.
If we run all the handlers for all the job_configs sequentially, then for every entry in the db that needs babysitting we can reach the hard time limit for the task.
If we fail to babysit those entries in the db, the tasks pile up, leading to more hard time limit exceptions.
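For context, the "hard time limit" referred to here is Celery's per-task time limit. A minimal sketch of how soft and hard limits are configured; the values and the task below are illustrative, not the service's real configuration:

```python
from celery import Celery
from celery.exceptions import SoftTimeLimitExceeded

app = Celery("packit-worker")  # hypothetical app name

# Illustrative values; the limits actually configured for the service
# are not shown in this PR.
app.conf.task_soft_time_limit = 600  # SoftTimeLimitExceeded raised after 10 min
app.conf.task_time_limit = 900       # the worker kills the task after 15 min


@app.task
def babysit_builds():
    try:
        # Iterate over every db entry that needs babysitting; if all handlers
        # run sequentially here, the loop can exceed the limits above.
        ...
    except SoftTimeLimitExceeded:
        # Work that did not finish stays pending and accumulates for the
        # next run, producing more hard time limit errors.
        raise
```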