Implement systemd-style readiness notification via NOTIFY_SOCKET #401
Conversation
I guess a proper implementation should also do credentials checking, and have only one socket that is created by dinit on start? Doing this is simpler, but all sockets in the abstract namespace are visible to all processes, which may result in some unfortunate sharing. Doing it with one socket, however, would require some kind of non-portable checking, e.g. via cgroups. I guess using separate sockets and giving them randomized names in the abstract namespace would probably be good enough? hm... (In either case, the socket needs to live at least as long as the process; we cannot close it upon receiving initial readiness, as the process expects to be able to write into it during the lifetime of the application.)
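The sharing concern above can be demonstrated in a few lines. This is a minimal, hypothetical sketch (Linux-only, since the abstract namespace is a Linux extension; the socket name is made up): because abstract-namespace sockets live outside the filesystem, no file permissions protect them, and any process that knows the name can deliver a spoofed readiness datagram.

```python
import socket

# Linux-only: abstract-namespace names start with a NUL byte and never
# touch the filesystem, so no file permissions protect them.
NAME = b"\0dinit-readiness-demo"  # hypothetical name for illustration

# The "service manager" side binds a datagram socket in the abstract namespace.
listener = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
listener.bind(NAME)

# Any process that knows (or guesses) the name can send to it; here a
# completely unrelated socket delivers a spoofed readiness datagram.
stranger = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
stranger.sendto(b"READY=1", NAME)

spoofed = listener.recv(64)
```

The listener has no portable way to tell this datagram apart from one sent by the actual service, which is exactly the "unfortunate sharing" problem.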
This is something of a can of worms...
So that's a good question. I'm for it in the sense of improved compatibility with existing software (services), but my opinion is that it's not a good interface. If we want to support it at all I'd prefer to go for a solution that is minimal change within dinit itself, though that may be at the expense of extra service configuration. More on this below.
It's probably not simpler, because you have to map the process ID back to the service, and readiness can be signalled by a child process of the service (not necessarily the service's main process). So systemd actually walks the process hierarchy to find the parent and identify the service that way, if it needs to, but this opens it up to a race condition where the notification is processed after the process that sent it has already terminated.
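To make the race concrete, here is a rough sketch (my own illustration, not dinit code) of mapping a notifying PID back to its ancestors by walking parent links through Linux's /proc. If any process on the chain exits mid-walk, its /proc entry vanishes and the lookup fails, which is the race described above.

```python
import os

def parent_of(pid: int) -> int:
    """Return the parent PID by parsing /proc/<pid>/stat (Linux-only)."""
    with open(f"/proc/{pid}/stat", "rb") as f:
        data = f.read()
    # comm (field 2) is parenthesized and may itself contain spaces or
    # parens, so split on the *last* ')'; ppid is the second field after it.
    after = data.rsplit(b")", 1)[1].split()
    return int(after[1])

def ancestry(pid: int) -> list:
    """Walk pid -> parent -> ... up to init (1) or the kernel/namespace
    boundary (0). If a process on the chain has already exited, opening
    its /proc entry raises FileNotFoundError -- the race in question."""
    chain = [pid]
    while chain[-1] not in (0, 1):
        chain.append(parent_of(chain[-1]))
    return chain

chain = ancestry(os.getpid())
```

A service manager doing this per notification has to hope every ancestor stays alive for the duration of the walk; systemd sidesteps the issue with cgroup membership instead.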
One option is to not use abstract sockets, but plain old unix sockets. This has a couple of advantages: filesystem permissions apply to them, and they are portable beyond Linux.
It could even be that the location (path) of the socket must be specified in the service configuration. That makes implementation as simple as possible, and is my preference for how it would be done, though it's still possible to generate a socket per service automatically.
We could probably get away without this if using standard unix sockets, one per service.
My main issue with using plain old sockets is the same: permissions protect them, but in the end any process running as the same user is allowed to poke the socket to signal readiness, which is kind of unfortunate. I'm not sure how much of a problem that actually is in practice, though.
The way systemd gets around it is that it checks the process credentials that come with the datagram. We can't do that, because the allowed processes should include any child process of the original service process, and there is no way to find out which ones those are; systemd knows because it has a cgroup per service.
I would guess it's not a problem in practice at all, but it does make me slightly uneasy too.
Thinking about this more: if I end up doing this at all (which I'm not 100% sure about yet), I will do what you said, i.e. use regular unix domain sockets (and add more fields to configure their permissions and ownership, similar to how the activation socket has them), tell people to use this functionality very sparingly, and require the path to be explicitly specified. It should be enough for the cases where nothing else can be done to get readiness notification, and it shouldn't require anything overly complicated or non-portable... I don't really like this interface myself either, but I think it's still useful to have (it's not something we should recommend in general, more of a last-resort thing). It also has some advantages over the file descriptor approach (notably, it lets child processes signal readiness without having to worry about passing file descriptors around, which can get complicated), but probably not enough to outweigh the overall cost...
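As a rough sketch of what such a service description might look like, assuming the `ready-notification=socket:path` syntax from this PR; the permission/ownership field names below are hypothetical, modeled on dinit's existing activation-socket options, and may differ from whatever is finally adopted:

```
# Hypothetical service description, for illustration only.
type = process
command = /usr/bin/mydaemon
ready-notification = socket:/run/mydaemon/notify.sock
# Assumed names, by analogy with the activation-socket options:
socket-permissions = 600
socket-uid = mydaemon
socket-gid = mydaemon
```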
I updated the PR to implement things like above. Documentation still pending, but you can review what is there... |
The service may specify this with ready-notification=socket:path. This will result in dinit creating a datagram socket at the given path and performing reads on it. As soon as the READY=1 datagram arrives, the readiness notification is signalled.
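From the service's side, the protocol is the usual sd_notify convention: read the socket address from $NOTIFY_SOCKET and send a READY=1 datagram to it. A minimal sketch (my own illustration, not part of this PR), following the convention that a leading '@' in the address denotes a Linux abstract-namespace name:

```python
import os
import socket

def notify_ready() -> bool:
    """Send READY=1 to the socket named in $NOTIFY_SOCKET.

    Follows the sd_notify convention: a leading '@' denotes a
    Linux abstract-namespace name. Returns False if the variable
    is unset (i.e. no manager is listening)."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    if addr.startswith("@"):
        addr = "\0" + addr[1:]  # abstract namespace (Linux)
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
        s.sendto(b"READY=1", addr)
    return True
```

Because the environment variable and datagram format match what systemd-aware services already emit, existing software should work unmodified against dinit's socket.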
I'm in favor of using a wrapper. Besides what I've said in the original issue, IMO keeping the socket open for the lifetime of the process only helps if we want to try to trick programs that don't follow systemd's own guidelines on how to support a non-systemd use case.
A wrapper is unreliable regardless: the process may never signal readiness, and if an approach like dbus-wait-for is taken, you'll end up with a forked-off child process that stays around forever. Having builtin support with a persistent socket adds a fairly minimal amount of code and no dependencies anyway (if that were possible for dbus, I'd prefer to make that builtin too, but it's not). Plus it's cleaner and easier to use, and more universal (I've seen services that error when the env is set and the fd is open but the other end is closed).
The service may specify this with ready-notification=socket. This will result in dinit creating an abstract datagram socket and performing reads on it. As soon as the READY=1 datagram arrives, the readiness notification is signalled.
Note that this is a very rough proof-of-concept implementation (e.g. the socket path is hardcoded, and so on). I mostly just want your opinion on whether to do this at all and how to provide the interface; once I know whether and how to proceed, I will clean it up.
Also, as it stands it carries a dependency on abstract sockets, which are a Linux extension. I wonder whether something else should be done on other platforms, or whether to make this optional Linux-only functionality, or...
I considered implementing this with a wrapper, but since the mechanism uses datagram sockets, there is no way to know what the lifetime of the watcher process should be, so I decided not to. Doing it in dinit means the service manager knows how long to keep the socket around.
Fixes #310