Refactor upgrade-crd job to not copy binaries #571
Comments
@qiuming-best Perhaps we could reconsider splitting the velero Helm chart into two charts: one containing only the CRDs, and the other containing the rest of the Velero Kubernetes resources. WDYT?
To be honest, I am simply astonished that someone thought it was a great idea to copy binaries between containers in this way. Are you serious? This could never have worked, for the very obvious reason that the target image is missing the dependencies needed to run a foreign, dynamically linked binary. Did anyone even test the chart before publishing? It looks like not. @mjnagel's suggestion seems very reasonable: to keep the velero image slim, the workflow needs to be reversed. First, generate the CRD manifests from an init container and save them to a shared location, then apply them with kubectl from the main job container. Alternatively, the CRDs could simply be shipped with the Helm chart itself, or the velero binary could be made to interact with the Kubernetes API directly to apply the necessary resources. @jenting, creating a separate chart just for CRDs is counterproductive, since it complicates the deployment process by introducing a strict deploy order (which would break automatic deployment flows with tools like ArgoCD), and there are better alternatives available, which I've described above.
Please check #450 (comment). Perhaps we could revisit it to see whether it is still valid.
@jenting it does look like you could use the
@Feliksas moving CRDs to the chart itself would change behavior around upgrades (since Helm ignores any CRD changes on upgrades), so I don't think that is the ideal route. The mention of velero CLI changes is a good one - I think that's a viable alternative path, as mentioned in my issue.
Describe the problem/challenge you have
When using the velero helm chart I have strict requirements to deploy with specific images in my environment. Some of these images are slimmed down/hardened and don't match the exact way this chart is set up. Specifically, I am encountering an issue with the upgrade-crd job when it copies `sh` and `kubectl` into a shared volume, due to missing glibc (one of the images is Alpine based).

While this is very much an issue unique to my environment (and I am working through changes to the image to make this work), it did make me question the way a shell and the kubectl binary are being copied around, given architecture and other concerns with copying binaries.
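For context, the current pattern looks roughly like the pod-spec fragment below. This is a paraphrase rather than the chart's actual template: the image names, tags, paths, and exact commands are placeholders, but the overall shape is the point.

```yaml
# Paraphrased fragment of the current upgrade-crd job's pod spec
# (illustrative only; images, paths and commands are approximations).
volumes:
  - name: tools
    emptyDir: {}
initContainers:
  # Copies a shell and kubectl out of a kubectl image into the shared volume.
  - name: copy-binaries
    image: bitnami/kubectl:1.27        # placeholder
    command: ["/bin/sh", "-c", "cp /bin/sh /usr/local/bin/kubectl /tools/"]
    volumeMounts:
      - name: tools
        mountPath: /tools
containers:
  # Runs the copied binaries from inside the velero image. They are dynamically
  # linked against the libc of the source image, so on a musl-based (Alpine)
  # velero image they fail to execute.
  - name: upgrade-crds
    image: velero/velero:v1.11.0       # placeholder
    command: ["/tools/sh", "-c", "/velero install --crds-only --dry-run -o yaml | /tools/kubectl apply -f -"]
    volumeMounts:
      - name: tools
        mountPath: /tools
```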
Describe the solution you'd like
If the upgrade-crd job were re-architected/re-ordered:
- an init container using the velero image would generate the CRD manifests and write them to the shared volume
- the main container using a kubectl image would then apply those manifests
This would effectively just change what is being copied via the volume: rather than binaries, it would just be copies of the CRD manifests (see the sketch below). This seems like it would reduce some of the complexity and strangeness in how this job currently runs.
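Roughly, the reordered job could look like this. It is only a sketch under some assumptions: image names, tags, and paths are placeholders, and it assumes the velero image has a shell for the output redirect (a shell-less image would need an output-file option on the CLI instead).

```yaml
# Sketch of a reordered upgrade-crd job; images, tags and paths are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: velero-upgrade-crds
spec:
  template:
    spec:
      restartPolicy: OnFailure
      volumes:
        - name: crds
          emptyDir: {}
      initContainers:
        # Step 1: render the CRD manifests with the velero image and save them
        # to the shared volume; only YAML crosses the volume, no binaries.
        - name: generate-crds
          image: velero/velero:v1.11.0        # placeholder
          command: ["/bin/sh", "-c", "/velero install --crds-only --dry-run -o yaml > /crds/crds.yaml"]
          volumeMounts:
            - name: crds
              mountPath: /crds
      containers:
        # Step 2: apply the manifests from a kubectl image, which already has
        # everything it needs (shell, libc, kubectl).
        - name: apply-crds
          image: bitnami/kubectl:1.27         # placeholder
          command: ["kubectl", "apply", "-f", "/crds/crds.yaml"]
          volumeMounts:
            - name: crds
              mountPath: /crds
```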
Anything else you would like to add:
I could also see this being done "one layer up" if the velero CLI had an option to force apply/upgrade CRDs when running `velero install --crds-only`. This may actually be the best option, as it would drop the need for the shared volume/extra container entirely and allow the velero image to stay slim, with no shell or kubectl. It appears that the current behavior is tailored to install only and skips CRDs if they already exist?
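If the CLI grew that option, the job's pod spec could collapse to something like the fragment below. This is hypothetical: as far as I can tell there is no force-apply/upgrade behavior today, so it only illustrates how much simpler the job would become.

```yaml
# Hypothetical single-container fragment, assuming the velero CLI could
# force apply/upgrade CRDs on its own: no shared volume, no shell, no kubectl.
containers:
  - name: upgrade-crds
    image: velero/velero:v1.11.0   # placeholder
    command: ["/velero", "install", "--crds-only"]   # would need upgrade semantics added
```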