Sign docker images using cosign #8685
Conversation
Force-pushed fd48d69 to 5abc4eb
A word of warning: I am not 100% certain I fully understand everything that the …
cc @samypr100 (as always, no obligation)
.github/workflows/build-docker.yml (outdated review thread)
# https://github.com/docker/buildx/issues/2407. Using a separate command to
# retrieve it _now_ is better than doing it later though, as it is highly
# unlikely that the digest changes in the same job with the same local docker setup.
What does this last sentence mean? I don't quite follow.
Might be useful to explain, in a separate step, what the command does (e.g. its outputs) and clarify that the issue is that `docker buildx imagetools create` doesn't return a list of updated digests.
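For context, a sketch of what retrieving the digest looks like, reusing the workflow's own variables; the output value is elided and purely illustrative:

```console
$ docker buildx imagetools inspect \
    "${UV_BASE_IMG}:${DOCKER_METADATA_OUTPUT_VERSION}" \
    --format '{{json .Manifest}}' | jq -r '.digest'
sha256:…
```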
I've expanded the comment, explaining why it is being done in the same step, what the digest is needed for and what each command does.
)" | ||
echo "digest=${digest}" >> "$GITHUB_OUTPUT" | ||
|
||
- name: Sign the manifest with GitHub OIDC Token |
Do we need to sign again here? Is it correct to sign twice?
I think we should clarify (maybe in the title) that this signs the `uv:latest` images, whereas the other one signs the "extra images" (e.g. Python-based etc.).
Exactly; this workflow outputs more than one type of docker image. There is the plain uv image (tagged as `:[MAJOR].[MINOR].[PATCH]`, with the `:[MAJOR].[MINOR]` and `:latest` tags being re-targeted to these), and then there are the various OS / Python version combination images, which take a base image from the `image-mapping` list and copy in the `uv` and `uvx` commands. You need to sign each of these images separately.
The combination images are being signed in the `docker-publish-extra` job, and this signing step is for the plain image, which is re-published with extra annotations after the combination images have been generated, to make sure it is listed at the top of the container registry page.
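For illustration, the signing step described here amounts to a keyless `cosign sign` bound to the manifest digest rather than the tag. A rough sketch follows; the step layout and variable names are assumptions rather than the PR's exact code, and it assumes `UV_BASE_IMG` is already set as a job-level environment variable as in the quoted snippets:

```yaml
# Sketch only: step layout and variable names are assumptions, not the PR's code.
# Keyless signing via the GitHub OIDC token needs `permissions: id-token: write`.
- name: Sign the manifest with GitHub OIDC Token
  env:
    DIGEST: ${{ steps.digest.outputs.digest }}
  run: |
    # Sign by digest so the signature stays bound to this exact manifest,
    # even if the tag is later re-targeted.
    cosign sign --yes "${UV_BASE_IMG}@${DIGEST}"
```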
digest="$( | ||
docker buildx imagetools inspect \ | ||
"${UV_BASE_IMG}:${DOCKER_METADATA_OUTPUT_VERSION}" \ | ||
--format '{{json .Manifest}}' \ | ||
| jq -r '.digest' | ||
)" | ||
echo "digest=${digest}" >> "$GITHUB_OUTPUT" |
I believe this can be done in a separate step instead, to keep this step less convoluted.
I was specifically avoiding a separate step, because that would pose a security risk.
In theory, given enough time, a docker image tag could have been moved from one digest to another. A signature on a manifest from a trusted entity has enormous value, and if an attacker found a way to very quickly move the tag to a digest of their choice before the signing step was executed, they could steal that signature. This is why it is a reasonably big deal that `imagetools create` doesn't output the digest in a machine-readable form the moment it creates the manifest. However, since `imagetools inspect` is run within the same step, it is executed against the same docker installation, which already has the tag and manifest locally cached, and so changing the tag would require the attacker to have access to the runner while this step is being executed.
If we move this part to a new step, we introduce many more opportunities for the tag to be moved: there is a time gap, the job is likely to be handled by a different runner which will have to fetch the digest from the container registry, etc. This greatly increases the attack surface for getting that tag moved to a different digest, and I really don't want to increase the attack surface here.
I'll see about updating the comments here to make this clearer.
> the job is likely to be handled by a different runner which will have to fetch the digest from the container registry
Thanks for the explanation. In this particular case steps are still guaranteed to be executed within the same runner VM and job executor, hence separating this into its own step is still worthwhile for readability. It doesn't give us much of a difference in terms of security in this case and, more importantly, it does not increase the attack surface. We'd honestly have bigger issues to worry about if an attacker could inject themselves into the runner VM and the process executing the steps. I think we can reconsider this once `imagetools create` has a mechanism to return the digests.
> Within the same step, we can count on the local docker cache of the tag not having changed.

This is also true for steps within the job itself, so we should update this comment.
> In this particular case steps are still guaranteed to be executed within the same runner VM and job executor
🤦 Of course they are, duh. Yeah, you are correct, and I'll make this a separate step. I was overthinking this.
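A minimal sketch of what that split might look like, built from the snippet quoted above; the step ids and names are illustrative, not necessarily the exact change that was pushed:

```yaml
# Illustrative split; step ids and names are assumptions, not the PR's exact code.
- name: Get manifest digest
  id: manifest-digest
  run: |
    # Same runner VM and docker state as the previous step, so the locally
    # cached tag still points at the manifest that was just created.
    digest="$(
      docker buildx imagetools inspect \
        "${UV_BASE_IMG}:${DOCKER_METADATA_OUTPUT_VERSION}" \
        --format '{{json .Manifest}}' \
      | jq -r '.digest'
    )"
    echo "digest=${digest}" >> "$GITHUB_OUTPUT"

- name: Sign the manifest with GitHub OIDC Token
  run: cosign sign --yes "${UV_BASE_IMG}@${{ steps.manifest-digest.outputs.digest }}"
```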
Change pushed.
Thanks! Looks great
Force-pushed 5abc4eb to d1a3809
Force-pushed d1a3809 to 2872b9a
Force-pushed 2872b9a to 9b162d3
Note: I'm running a release job on my fork with these changes (9b162d) as I'd like to see it working E2E.
@mjpieters signed images are here https://github.com/samypr100/uv/pkgs/container/uv in case you'd like to verify the signatures are working as expected.
Force-pushed 9b162d3 to 4a278c1
Excellent! I was working my way through understanding cargo dist in my own fork to try this out but hadn't gotten as far as actually seeing the workflow complete. The first thing I notice is that the .sig files are now obscuring the actual docker images on that page, which is a bummer. The signatures are valid however!
I'll have to see what can be done about those signature files in the registry.
So cosign pushes the signature as a new 'container' to the registry, tagged with a name derived from the digest of the signed manifest. I've looked at a number of other projects that use cosign to sign their containers (of which there are a very large number), and all the ones I looked at so far show these tags at the top. E.g. the Dependabot update bundler (a GitHub project!) has signature tags at the top. You see this on other registries too, e.g. the boxyhq jackson container on Docker Hub. I see but a few options here: …
We're already pushing the …
You push not only a tag, but a manifest as well. You'd have to separate the two steps: push the manifest without tags and then tag the digest of the manifest afterwards. It could be that you can retag the same digest; I'll try this out in my own fork registry.
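If retagging an existing digest does work, it would presumably look something like the following; a sketch only, with the tag and digest values as placeholders:

```sh
# Sketch: re-point a tag at an already-pushed manifest, identified by digest.
# The digest value is a placeholder.
docker buildx imagetools create \
  --tag "${UV_BASE_IMG}:latest" \
  "${UV_BASE_IMG}@sha256:<digest>"
```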
I think the main issue here is how GH ranks packages and gives orgs/users no control over the ordering of what's displayed, so we have to rely on these ordering hacks related to publishing.
…and from my experiments it is clear that it is the order in which the manifests were pushed to the registry that is used to list them. I tried deleting then restoring, and retagging (moving tags to another manifest and then back again to the right manifest), and nothing shifts the signature from the top. So the only thing that would work is to build and not push, sign (and let cosign push the signature manifest), and only then push the actual image manifest.
I've tried to use the …
If I find time to pick this up again: there is an excellent overview of why you can't use the docker/build-push-action action to separate building and pushing at NASA-PDS/devops#78 (comment), which points to containerd as the way forward, but... that's not yet an official option.
Agreed, I believe the existing tooling is limited for these types of scenarios.
To be clear: this PR is ready in every other way. If not being able to pin the current base release container to the top is an issue, then please close this, as I don't think we can practically do that and provide signed containers.
I'm not sure how to balance the trade-offs with the regression; it seems pretty confusing to users to not see the actual image tag they'd use.
@mjpieters Is it possible to push the signatures (.sig files) elsewhere, e.g. to a different package? Maybe I missed it if this was discussed. Given the GH limitations, that's probably the next step to explore?
Yes, it is technically possible to push them to a different registry; see the cosign documentation. However, I don't recommend doing this without first verifying that you can still verify the signature. I fear that the …. I do not currently have the time to test this option; you'd have to pull my PR into a fork repository, add the …
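Per the cosign documentation, this redirect is driven by an environment variable that controls where signatures are stored and looked up. A sketch, building on the digest step sketched earlier; the signature repository name here is purely hypothetical:

```yaml
# Sketch only: the signature repository name is hypothetical.
- name: Sign the manifest with GitHub OIDC Token
  env:
    # cosign stores the signature here instead of alongside the image;
    # anyone verifying needs the same setting to find it.
    COSIGN_REPOSITORY: ghcr.io/astral-sh/uv-signatures
    DIGEST: ${{ steps.manifest-digest.outputs.digest }}
  run: cosign sign --yes "${UV_BASE_IMG}@${DIGEST}"
```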
Thanks for the details, I will try to take a look at some point. I assume this approach would introduce a new package (registry) called …. I also wonder, are there other projects that you know of that use ghcr.io + cosign? I'd like to see if this is a common issue and if there are other known workarounds that we may be missing when using cosign. I'll take a look as well.
I looked, but all those that I found put their signatures straight into the GitHub container registry alongside their containers (and don't try to control what the top-listed entry is).
cosign uses the GitHub Actions ID token to retrieve an ephemeral code signing certificate from Fulcio, and stores the signature in the Rekor transparency log.
Once an image has been successfully signed, you should be able to verify the signature with:
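A sketch of what that verification could look like; the image name and certificate identity below are assumptions rather than values taken from this PR, and would need to match the actual repository and signing workflow:

```console
$ cosign verify ghcr.io/astral-sh/uv:latest \
    --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
    --certificate-identity-regexp='^https://github\.com/astral-sh/uv/\.github/workflows/.*'
```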
Closes #8670