Podman volume SELinux relabeling causes slow startup with large amount of packages #508
Here are some numbers I gathered from an environment of mine. It has 246 channels and 154 repositories synced.
Thank you for the report and the detailed data you have shared. From that data, this could certainly be an issue for large-scale environments. We will analyze it.
Thanks for your analysis! This is strange, because the documentation reports:
Are you sure that in your case a relabeling wasn't actually required?
The first time, relabeling was understandably required and I could see the labels change. But Podman by default seems to still do relabeling, or some kind of checks, after every restart of uyuni-server.service. Here are some more details from another environment with just ~250 GB of synced packages.
Rebooting causes the inode, page, and dentry caches to be dropped; the caches can also be dropped manually. I can do more tests and gather more info later this week from the other environment if needed.
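For reference, dropping those caches manually uses the kernel's standard drop_caches interface (a generic Linux mechanism, not specific to Uyuni; writing to it requires root):

```shell
# Flush dirty pages to disk first, then drop the page cache, dentries, and inodes.
sync
echo 3 > /proc/sys/vm/drop_caches 2>/dev/null || echo "writing drop_caches requires root"
```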
Very large Uyuni environments may suffer from slow (re)starts due to the way uyuni-server.service handles container initialization.

With the changes introduced in PR #451, Podman has to ensure that the SELinux labels of each volume are correct every time the service is started (because the container is non-persistent). This can be extremely slow in environments that have several terabytes' worth of packages synchronized and/or slow disks.
DISCLAIMER: I have only experimented with AL9 & RHEL9 as the Podman host OS and with custom volumes NOT set up with mgr-storage-server. I understand if the Uyuni Project does not want to support such environments.

Workaround
WARNING: This change will not persist across container upgrades or re-installs and may have security implications!
A non-persistent workaround can be applied to speed up the startup of the container: remove the :z suffix from each (large/slow) Podman volume in /etc/systemd/system/uyuni-server.service and run systemctl daemon-reload to apply the changes.

You may additionally have to relabel each volume manually if SELinux prevents access to some files:
chcon -R -t container_file_t /path/to/volume
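The suffix removal described above can be sketched with sed. The volume names below are hypothetical examples, and you should back up the real unit file before editing it in place:

```shell
# Example ExecStart fragment with two hypothetical volume mounts.
line='-v var-spacewalk:/var/spacewalk:z -v var-pgsql:/var/lib/pgsql:z'

# Strip the ":z" relabel suffix from every volume mount.
echo "$line" | sed 's|\(:/[^ ]*\):z|\1|g'
# -> -v var-spacewalk:/var/spacewalk -v var-pgsql:/var/lib/pgsql

# Against the real unit file (back it up first!), the same expression would be:
#   sed -i 's|\(:/[^ ]*\):z|\1|g' /etc/systemd/system/uyuni-server.service
#   systemctl daemon-reload
```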
NOTE: The --security-opt label=disable option doesn't seem to help; relabeling still happens, at least on RHEL9 hosts, when the :z suffix is present. Not sure if I'm just doing something wrong.

Proposal
I think the SELinux relabeling could be managed by mgradm to accommodate large environments and environments with custom volume mounts. As far as I understand, this would mean dropping :z from the volume definitions in the uyuni-server.service unit file.

Perhaps a quick sanity check via ExecStartPre would be wise? It could go through all of the volumes' directories (with a limited depth, to speed up the process) and ensure that at least the root directories of the volumes have the correct permissions/ownership and labels. In case of a mismatch, the whole volume would then be relabeled (note that this could cause the systemd service start to time out!).
This would mimic the way Kubernetes can be configured to speed up SELinux label and permission/ownership checks (fsGroupChangePolicy: OnRootMismatch).
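A minimal sketch of what such an ExecStartPre= check could look like, assuming hypothetical volume paths and container_file_t as the target type (this is not existing Uyuni code, just an illustration of the root-mismatch idea):

```shell
#!/bin/sh
# Hypothetical ExecStartPre= check (sketch only): relabel a volume only when
# the SELinux type of its root directory differs from container_file_t,
# similar to Kubernetes' fsGroupChangePolicy: OnRootMismatch.

# Extract the type field (3rd ":"-separated field) from an SELinux context string.
label_type() { printf '%s\n' "$1" | awk -F: '{print $3}'; }

for vol in /var/spacewalk /var/lib/pgsql; do   # assumed volume paths
    [ -d "$vol" ] || continue                  # skip missing volumes
    ctx=$(stat -c %C "$vol" 2>/dev/null) || continue
    if [ "$(label_type "$ctx")" != "container_file_t" ]; then
        echo "label mismatch on $vol, relabeling..."
        # May be slow on huge volumes; mind the unit's start timeout.
        chcon -R -t container_file_t "$vol" || echo "relabel failed on $vol"
    fi
done
```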
An additional "fix SELinux labels" command could be added to mgradm as well, to offer an easy way to manually fix labels across all Uyuni-related volumes in case some files somehow end up incorrectly labeled.

I would be interested in implementing such features if this is something you would like to add to uyuni-tools.
Sources and more information