bug 1597598 - use taskgraph #226
Conversation
WIP is in https://github.com/escapewindow/staging-scriptworker-scripts/commits/master. I figured rebasing all into one commit was cleaner. I can split this up more if preferred.
@tomprince, @Callek, are you up for review?
Ah, I may need to add a …
If you look at https://firefoxci.taskcluster-artifacts.net/UmJBeMDlQuGOk1q9f35kEQ/0/public/label-to-taskid.json, you see all the tasks that would have been scheduled, but are using …
Hm. Not sure why the PR status isn't being reported. This may be due to GitHub repo settings that assume we're using non-taskgraph checks?
The error above seems to be regarding the merge to my fork; my fork doesn't have the right scopes. Most likely I need to either add scopes to my fork, or turn off the TC integration there. Edit: did the latter.
@escapewindow I think so. If I remember correctly, I've hit this too. That's because the FirefoxCI TC-github integration listens to push events on the repo. We can either grant both your fork and the main repo the same level of scopes, or turn off the integration on the fork so as not to confuse the listener.
taskcluster/scriptworker_taskgraph/transforms/python_version.py (resolved, outdated review threads)
@MihaiTabara @JohanLorenzo I've tweaked the …
LGTM |
Looks great to me overall, nice job! 👍 I'm glad taskgraph has some use for projects like scriptworker-scripts! I think it's quite suited because of the repetition between projects.
I found a few things that can be improved in follow-ups, as well as minor nits. I don't see anything that immediately blocks this PR.
```yaml
job-defaults:
    description: "{name} k8s image builder py{python_version}"
    name: "{name}-python{python_version}"
    run-on-tasks-for: ["action", "github-pull-request", "github-push"]
```
Question: I've never used `action` in `run-on-tasks-for` before. What's the expected behavior in this case?
We're probably going to use a different filter for actions, so it's possible this is extraneous. I originally used it as a way to filter for relpro, but we can also specifically filter for relpro elsewhere.
```yaml
# TODO copy image to artifacts

jobs:
    addonscript: {}
```
Followup: I wonder if we should create a dedicated loader to avoid repeating each subproject here. For instance, there are more than a hundred subprojects all defined in this file. The loader would read that file, generate the right jobs, and apply the default configuration before passing jobs to transforms.
I see we repeat the list of projects here and in another kind, so maybe we want a dedicated loader there too.
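To make the suggestion concrete, here is a minimal sketch of what such a dedicated loader could look like. The `SUBPROJECTS` list, the loader name, and the config keys are illustrative stand-ins, not the actual scriptworker-scripts code; only the `(kind, path, config, params, loaded_tasks)` signature follows taskgraph's loader convention.

```python
# Hypothetical loader: expand a shared subproject list into jobs so each
# kind.yml no longer has to repeat every project by hand.
from copy import deepcopy

# Stand-in for the shared project list the comment refers to.
SUBPROJECTS = ["addonscript", "balrogscript", "signingscript"]

def subproject_loader(kind, path, config, params, loaded_tasks):
    defaults = config.get("job-defaults", {})
    overrides = config.get("jobs", {})
    for name in SUBPROJECTS:
        # Start from the kind's defaults, then apply per-project overrides.
        job = deepcopy(defaults)
        job.update(overrides.get(name, {}))
        job["name"] = name
        yield job
```

The kind's `jobs:` section would then only need entries for projects that deviate from the defaults.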
Sure, possibly. We could add configuration as to whether we want tox, which versions of python, and whether we want a k8s image (and which versions of python). I think the current breakdown works too.
```python
force_push = False
for prefix, tag in PROJECT_SPECIFIC_PREFIXES.items():
    if parameters["head_ref"].startswith(prefix):
        parameters["script_name"] = parameters["head_ref"].replace(prefix, "")
```
Followup: I wonder if we should bail out here when `script_name` doesn't match any known script. That would prevent the decision task from being green while it scheduled nothing.
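A minimal sketch of that sanity check, assuming a `KNOWN_SCRIPTS` set and prefix values that are stand-ins for the real configuration:

```python
# Hypothetical bail-out: raise during parameter resolution when a
# dev-/production- push names an unknown script, rather than letting the
# decision task go green while scheduling nothing.
KNOWN_SCRIPTS = {"addonscript", "balrogscript", "signingscript"}  # stand-in
PROJECT_SPECIFIC_PREFIXES = {
    "refs/heads/dev-": "dev",
    "refs/heads/production-": "production",
}

def resolve_script_name(parameters):
    for prefix, tag in PROJECT_SPECIFIC_PREFIXES.items():
        if parameters["head_ref"].startswith(prefix):
            script_name = parameters["head_ref"].replace(prefix, "")
            if script_name not in KNOWN_SCRIPTS:
                raise ValueError(f"Unknown script name: {script_name!r}")
            parameters["script_name"] = script_name
    return parameters
```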
Sure. I'm hoping we follow up with relpro and don't rely on this dev-/production-scriptname behavior for much longer, but that's a good sanity check if we keep this behavior.
```python
for field in fields:
    _resolve_replace_string(task, field, subs)
task["attributes"]["python-version"] = python_version
yield task
```
Followup: I wonder if the python version logic shouldn't be part of the dedicated loader. In android-components we also have 2 dimensions:
- Subprojects
- Build types (regular, nightly, release)
The specific loader handles the latter there.
Could do. If we add a non-python script, we either have to adjust the transforms or the loader, depending on which solution we go with. Which would make more sense?
Edit: thinking about it, in this case the transforms would make more sense. The `tox` kind is python-specific, so no changes there. `k8s-image` would have to support both python and non-python, so the `python_version` transforms could pass any non-python tasks through unchanged, and the `rust_version` (or other) transforms could pass any non-rust tasks through unchanged. Putting all of this in the loader might be a bit much, but it might be doable.
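The pass-through idea could look roughly like this. It is a sketch, not the merged transform: the `python-versions` field name and the `(config, tasks)` generator shape follow taskgraph's transform convention, but the details are assumptions.

```python
# Hypothetical transform: expand tasks that declare python versions into
# one task per version, and yield anything else (e.g. a rust script)
# unchanged, so non-python tasks can share the same kind.
def add_python_versions(config, tasks):
    for task in tasks:
        versions = task.pop("python-versions", None)
        if not versions:
            # Non-python task: pass through untouched.
            yield task
            continue
        for version in versions:
            new_task = dict(task)  # shallow copy per generated task
            new_task["name"] = f"{task['name']}-python{version}"
            attributes = dict(task.get("attributes", {}))
            attributes["python-version"] = version
            new_task["attributes"] = attributes
            yield new_task
```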
It is not clear to me that the python-version transform is providing value for the `k8s-image` kind in any case.
I could see us creating k8s images for multiple python versions at some point, especially after we drop support for docker hub. I would like to land this PR at some point sooner rather than later, as well.
Yeah, I definitely wasn't suggesting changing anything here, just commenting on the future consideration with non-python things.
Co-authored-by: Johan Lorenzo <[email protected]>
Fixes #140.
This ports the existing tox and dockerhub tasks running on scriptworker-scripts to taskgraph, with the following changes:

- `cached_tasks`, so we only rerun tox and dockerhub tasks if something in their `digest-directories` changes;
- on `dev-*` and `production-*` pushes, we force-rerun the appropriate dockerhub task;
- `chain-of-trust` for future verification.

We're still using secrets to publish docker images to dockerhub, using the old logic. After this lands, we can follow up by having cloudops download an artifact from the task, with CoT verification. Also, we can modernize our k8s-image task with either gecko-type docker-image logic, or Tom's new non-dind hotness.
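For readers unfamiliar with `cached_tasks`, the idea behind `digest-directories` is to derive a cache key from the contents of a set of directories, so a task is only rerun when those contents change. A rough illustration of that idea (this mirrors the concept described above, not the exact taskgraph implementation):

```python
# Illustrative digest over a set of directories: hash every file's path
# and bytes so the result changes iff the directory contents change.
import hashlib
from pathlib import Path

def directories_digest(directories):
    h = hashlib.sha256()
    for directory in sorted(directories):
        for path in sorted(Path(directory).rglob("*")):
            if path.is_file():
                h.update(str(path).encode())
                h.update(path.read_bytes())
    return h.hexdigest()
```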
We can also follow up with: