Podman volume SELinux relabeling causes slow startup with large amount of packages #508

Open
santeri3700 opened this issue Nov 24, 2024 · 4 comments

Comments

@santeri3700
Contributor

Very large Uyuni environments may suffer from slow (re)starts due to the way uyuni-server.service handles container creation and startup.

With the changes introduced in PR #451, Podman has to ensure that the SELinux labels of each volume are correct every time the service is started (because the container is non-persistent). This can be extremely slow in environments with several terabytes' worth of synchronized packages and/or slow disks.

DISCLAIMER: I have only experimented with AL9 & RHEL9 as the Podman host OS and custom volumes NOT set up with mgr-storage-server. I understand if the Uyuni Project does not want to support such environments.

Workaround

WARNING: This change will not persist across container upgrades or re-installs and may have security implications!

A non-persistent workaround can be applied to speed up container startup: remove the :z suffix from each (large/slow) Podman volume definition in /etc/systemd/system/uyuni-server.service and run systemctl daemon-reload to apply the changes.

You may additionally have to manually relabel each volume if SELinux prevents access to some files: chcon -R -t container_file_t /path/to/volume
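
For illustration, a minimal sketch of the workaround as shell steps, assuming a volume argument of the form -v var-spacewalk:/var/spacewalk:z in the unit file (the exact volume names and mount paths depend on the setup):

# In /etc/systemd/system/uyuni-server.service, drop the :z suffix from the slow volume,
# e.g. change "-v var-spacewalk:/var/spacewalk:z" to "-v var-spacewalk:/var/spacewalk".
systemctl daemon-reload
systemctl restart uyuni-server.service
# If SELinux then denies access to files inside the volume, relabel it manually:
chcon -R -t container_file_t /path/to/volume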

NOTE: The --security-opt label=disable option doesn't seem to help; relabeling still happens, at least on RHEL9 hosts, when the :z suffix is present. I'm not sure whether I'm just doing something wrong.

Proposal

I think the SELinux relabeling could be managed by mgradm to accommodate large environments and environments with custom volume mounts. As far as I understand, this would mean dropping :z from the volume definitions in the uyuni-server.service unit file.

Perhaps a quick sanity check via ExecStartPre would be wise? It could go through the volumes' directories (with a limited depth to speed up the process) and ensure that at least the root directory of each volume has the correct permissions/ownership and labels.

In case of a mismatch, the whole volume would then be relabeled (this could cause the systemd service start to time out!).
This would mimic the way Kubernetes can be configured to speed up SELinux label and permission/ownership checks (fsGroupChangePolicy: OnRootMismatch).
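
As a rough illustration of such an OnRootMismatch-style check (not actual mgradm or uyuni-tools code; the volume path and expected type below are assumptions), an ExecStartPre helper could look something like this:

#!/bin/sh
# Hypothetical helper: relabel a volume only when the label of its root
# directory does not match the expected container type.
VOLUME=/var/lib/containers/storage/volumes/var-spacewalk/_data  # assumed path
EXPECTED_TYPE=container_file_t

current_type=$(stat -c %C "$VOLUME" | awk -F: '{print $3}')
if [ "$current_type" != "$EXPECTED_TYPE" ]; then
    echo "SELinux label mismatch on $VOLUME (found $current_type), relabeling..."
    chcon -R -t "$EXPECTED_TYPE" "$VOLUME"
fi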

An additional "fix SELinux labels" command could be added to mgradm as well, to offer an easy way to manually fix labels across all Uyuni-related volumes in case some files are somehow incorrectly labeled.
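
Such a command does not exist in mgradm today; as a sketch of what it could do under the hood (the target type and the use of chcon are assumptions for illustration, and a real implementation would filter to Uyuni volumes), it might simply iterate over the Podman volumes and relabel each mount point:

# Hypothetical sketch: relabel the mount point of every Podman volume.
for vol in $(podman volume ls --format '{{.Name}}'); do
    mountpoint=$(podman volume inspect "$vol" --format '{{.Mountpoint}}')
    chcon -R -t container_file_t "$mountpoint"
done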

I would be interested in implementing such features if this is something you would be interested in adding to uyuni-tools.

Sources and more information

@santeri3700
Contributor Author

Here are some numbers I gathered from an environment of mine. It has 246 channels and 154 repositories synced.
The /data/spacewalk mount is a 3TB SAS disk with ZFS which is bind mounted to the uyuni-server container as var-spacewalk.

# df -h /data/spacewalk/
Filesystem      Size  Used Avail Use% Mounted on
data-spacewalk  2.9T  2.5T  426G  86% /data/spacewalk

# time find /data/spacewalk/ -type d | wc -l
2786869

real    36m19.392s
user    0m14.585s
sys     6m46.836s

# time find /data/spacewalk/ -type f | wc -l
696235

real    29m41.111s
user    0m11.685s
sys     5m37.509s

# time chcon -R -t container_file_t /data/spacewalk/

real    45m10.621s
user    0m16.705s
sys     7m25.820s

@rjmateus
Member

Thank you for the report and the detailed data you have shared. Based on that data, this could certainly be an issue for a large-scale environment. We will analyze it.
Your proposal looks good for reducing the execution time, but it can lead to a timeout on systemd service start, as you mention.

@mbussolotto
Member

Thanks for your analysis! This is strange because the documentation reports:

If the volume was previously relabeled with the z option, Podman is optimized to not relabel a second time.

Are you sure that a relabeling wasn't actually required in your case?

@santeri3700
Contributor Author

The first-time relabeling was understandably required, and I could see the labels change. The default TimeoutStartSec of 15 minutes was a problem, however, since relabeling in the previously mentioned environment takes roughly 45 minutes. I had to increase the value to prevent the service from timing out before the relabeling completed.

Podman still seems to do relabeling, or some kind of check, after every restart of uyuni-server.service or reboot of the Podman host. I'm not sure whether this is related to the container being non-persistent, but that has been my suspicion so far.

Here are some more details from another environment with just ~250 GB of packages in var-spacewalk.

[root@uyunihomelab ~]# podman --version
podman version 5.2.2
[root@uyunihomelab ~]# systemctl disable uyuni-server.service <---- Just to be able to measure start times manually after each reboot
...
[root@uyunihomelab ~]# reboot <--- Clears inode, page and dentry caches (makes relabeling slower at first time after boot)

[root@uyunihomelab ~]# time systemctl start uyuni-server.service

real	3m40.035s <----- Consistently 3 minutes and 40 seconds to create and start the container after a fresh boot
user	0m0.002s
sys	0m0.016s

[root@uyunihomelab ~]# systemctl stop uyuni-server.service <----- Stop and delete the container

[root@uyunihomelab ~]# reboot <--- Another reboot to clear inodes and dentries caches

[root@uyunihomelab ~]# time chcon -R -t container_file_t /data/var-spacewalk

real	3m41.610s <----- Consistently matches very close to the uyuni-server.service start time after a reboot
user	0m1.995s
sys	0m22.404s

[root@uyunihomelab ~]# time systemctl start uyuni-server.service

real	0m7.932s <-------- Significantly faster start time now that the inodes and dentries are cached
user	0m0.005s
sys	0m0.010s

[root@uyunihomelab ~]# systemctl stop uyuni-server.service

[root@uyunihomelab ~]# time chcon -R -t container_file_t /data/var-spacewalk

real	0m8.157s <------- Also significantly faster and matches very close to the uyuni-server.service start time when inodes and dentries are cached
user	0m0.923s
sys	0m7.173s

Rebooting causes the inode, page, and dentry caches to be dropped. The caches can also be dropped manually with echo 3 > /proc/sys/vm/drop_caches, which in my experience results in the same start and manual relabel times.

NOTE: Removing the :z suffixes results in consistent 1-3s service start times even after rebooting the Podman host.

I can do more tests and gather more info later this week from the other environment if needed.
