Enable remote tasks to be run in cluster #1216
Conversation
Force-pushed from e09b4e7 to 500e20d
I found what seems to be one major issue where the task essentially becomes a no-op in the local case.

Also, since the buildah-remote task is now gaining local capabilities, it may be desirable to stop using the task-generator script and instead maintain the generated code directly. We can do this under the assumption that the remote-capable task will eventually replace the local-only task. Doing so will IMO simplify maintenance going forward.
Force-pushed from 500e20d to a724511
+1, the task-generator is quickly becoming very difficult to understand (the diff here is kind of impossible to review). Can we instead add the remote capabilities to the regular buildah task?
We certainly can consolidate the tasks. Changes like the one that I proposed in #1185 will become much simpler as well. If we consolidate, however, we will have to ensure that we support builds when the multi-platform-controller is not enabled. I don't think that we will be able to consolidate before the controller supports local builds.
Force-pushed from a724511 to e3da7bd
Force-pushed from 71e0059 to de9fc4b
Force-pushed from bc9027e to 5e0b00f
Force-pushed from b569064 to 7c63f42
Force-pushed from 7c63f42 to 06514d8
Force-pushed from 26af9a4 to 3a8c8b2
Force-pushed from f4e5b34 to 33b2df7
Force-pushed from 33b2df7 to 957d109
Ah, so we can't really replace the regular buildah task with this one if we don't get rid of this requirement. The basic buildah use case shouldn't require the multi-platform-controller.
Another problem for unifying the tasks is that the user doesn't get to decide whether to build in-cluster or on a VM - the multi-platform-controller config does.
Now that I think about it, won't these changes together with https://github.com/redhat-appstudio/infra-deployments/pull/4329/files break the option to run x86 builds on a VM?
Looks reasonable; my main concern is making sure we're not removing the option to run x86 builds on a VM (#1216 (comment)).
The user decides what platform they want to build for by specifying the parameter; the configuration determines whether that build should happen locally or on a VM. If it doesn't matter whether builds happen locally or remotely, the choice is left up to the system.

One challenge for the host configs/MPC controller will be identifying the best way to migrate workloads from a VM-configured platform to a local-configured platform (or vice versa). I think those challenges are outside the scope of the currently proposed changes to buildah-remote. For now there are only AMD nodes in clusters, but there might be ARM nodes on the cluster in the future.

With the changes in redhat-appstudio/infra-deployments/pull/4329, users can still run x86 builds on a VM; they just need to select one of the configured non-local hosts (i.e. using any of the […]). Today, […]
Ah right, it's amd64, not x86_64.
Force-pushed from 957d109 to 0d07675
By default, we should run builds matching the local architecture in-cluster to reduce the overhead of provisioning platforms. This will enable a fully matrixed build for all images using only the remote builds. This change will require the multi-platform controller to set the /ssh/host to localhost in order for the builds to run in-cluster. In a change from the prior behavior, we will now append a sanitized version of the entire PLATFORM to the image tag upon request. We will no longer try to extract just the arch from the PLATFORM, as not all platforms may follow the `os/arch` pattern. By appending the entire PLATFORM, we remove any dependency on how local/remote platforms are configured. This behavior now needs to be explicitly requested.

Signed-off-by: arewm <[email protected]>
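A rough sketch of the branching this commit message describes, assuming the controller-provisioned secret is mounted at /ssh; the variable names, ssh user, and buildah flags are placeholders, not the actual task script:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Sketch only: secret layout and defaults below are assumptions drawn from
# the commit message, not from the task definition in this PR.
IMAGE="${IMAGE:-quay.io/example/app:latest}"
CONTEXT="${CONTEXT:-.}"
SSH_HOST="$(cat /ssh/host)"   # written by the multi-platform controller

if [ "$SSH_HOST" = "localhost" ]; then
  # The controller decided no VM is needed for this PLATFORM:
  # run the build in-cluster, directly in this pod.
  buildah build -t "$IMAGE" "$CONTEXT"
else
  # Otherwise run the same build on the provisioned VM over ssh
  # (a real task would also sync the build context to the VM first).
  ssh -i /ssh/id_rsa "root@$SSH_HOST" "buildah build -t '$IMAGE' '$CONTEXT'"
fi
```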
Updating the image used for the remote tasks resulted in a bump in the buildah version. This includes containers/common@08fc0b450, which no longer sets the repository or tag when pulling without the optional name. To properly populate the image in the container's registry, we need to push and pull it with the optional name.

Signed-off-by: arewm <[email protected]>
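A minimal sketch of the push/pull change, assuming the task hands the built image back through an OCI layout directory; the directory path is made up, and only the idea of including the optional name comes from the message above:

```bash
# Hypothetical layout directory used to move the image between hosts.
LAYOUT_DIR=/var/workdir/remote-image
IMAGE="${IMAGE:-quay.io/example/app:latest}"

# Include the optional name when pushing into the OCI layout...
buildah push "$IMAGE" "oci:$LAYOUT_DIR:$IMAGE"

# ...and when pulling it back, so newer buildah records the repository
# and tag instead of leaving the image unnamed in local storage.
buildah pull "oci:$LAYOUT_DIR:$IMAGE"
```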
Force-pushed from 0d07675 to 49c8395
They still get to decide, by using a platform name that is explicit about it, such as […]
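The concrete platform names in this comment were lost when the page was captured; purely as an illustration (these values are hypothetical, not the ones from the original comment):

```bash
# Hypothetical PLATFORM values; the real names come from the
# multi-platform-controller host configuration.
PLATFORM=linux/amd64          # generic request: the config decides local vs VM
PLATFORM=linux-mlarge/amd64   # a pool name that is explicit about a remote VM
```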
By default, we should run builds matching the local architecture in-cluster to reduce the overhead of provisioning platforms. This will enable a fully matrixed build for all images using only the remote builds. This change will require the multi-platform controller to set the /ssh/host to localhost in order for the builds to run in-cluster.
In a change from the prior behavior of auto-appending the architecture to the tag, we will now append a sanitized version of the entire PLATFORM to the image tag upon request. This behavior now needs to be explicitly requested, which fixes a bug where users could not specify the exact tag desired when using a remote build.
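A sketch of what appending a sanitized PLATFORM on request could look like; the parameter name and the sanitization rule here are assumptions, not necessarily what the task implements:

```bash
# Hypothetical opt-in parameter and defaults.
IMAGE_APPEND_PLATFORM="${IMAGE_APPEND_PLATFORM:-false}"
PLATFORM="${PLATFORM:-linux/arm64}"
IMAGE="${IMAGE:-quay.io/example/app:latest}"

if [ "$IMAGE_APPEND_PLATFORM" = "true" ]; then
  # Replace anything that is not valid in an image tag (e.g. the "/" in
  # "linux/arm64") so the whole platform string can be appended.
  SANITIZED_PLATFORM="$(printf '%s' "$PLATFORM" | sed 's/[^A-Za-z0-9._-]/-/g')"
  IMAGE="${IMAGE}-${SANITIZED_PLATFORM}"
fi

echo "$IMAGE"   # quay.io/example/app:latest-linux-arm64 when requested
```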
NOTE: There are not currently any local platforms configured. These need to be added before this PR has an effect in production.