
Fix bugzilla 24524: Very slow process fork if RLIMIT_NOFILE is too high #8990

Closed
wants to merge 5 commits

Conversation

@trikko (Contributor) commented Apr 27, 2024

When the soft limit for NOFILE is set to a high number, the current code for spawnProcess, pipeProcess, etc. becomes very slow and memory intensive.

This is particularly evident when running a D application inside a Docker container. Docker sets the soft limit to the maximum allowed, which on some systems can be as high as 2^30.

This code reads the list of file descriptors in use from /dev/fd (or /proc/self/fd), avoiding the need to iterate over the entire range of possible file descriptors.
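
As a rough illustration, here is a minimal, hypothetical sketch of the idea (not the actual diff; the real patch builds its fd list with malloc/free, since it runs between fork and exec):

import core.stdc.stdlib : atoi;
import core.sys.posix.dirent : DIR, dirent, opendir, readdir, closedir;
import core.sys.posix.unistd : close;

// Returns false if /dev/fd is unavailable, so the caller can fall back
// to the old "close every fd up to the soft limit" loop.
bool closeInheritedFds(int firstFdToClose)
{
    DIR* dir = opendir("/dev/fd");          // or /proc/self/fd on Linux
    if (dir is null) return false;

    // Collect first, close afterwards: closing while iterating could also
    // close the descriptor used by the directory stream itself.
    int[] toClose;
    dirent* entry;
    while ((entry = readdir(dir)) !is null)
    {
        if (entry.d_name[0] == '.') continue;   // skip "." and ".."
        immutable fd = atoi(entry.d_name.ptr);
        if (fd >= firstFdToClose)
            toClose ~= fd;
    }
    closedir(dir);

    foreach (fd; toClose)
        close(fd);
    return true;
}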

@dlang-bot (Contributor) commented Apr 27, 2024

Thanks for your pull request and interest in making D better, @trikko! We are looking forward to reviewing it, and you should be hearing from a maintainer soon.
Please verify that your PR follows this checklist:

  • My PR is fully covered with tests (you can see the coverage diff by visiting the details link of the codecov check)
  • My PR is as minimal as possible (smaller, focused PRs are easier to review than big ones)
  • I have provided a detailed rationale explaining my changes
  • New or modified functions have Ddoc comments (with Params: and Returns:)

Please see CONTRIBUTING.md for more information.


If you have addressed all reviews or aren't sure how to proceed, don't hesitate to ping us with a simple comment.

Bugzilla references

Auto-close Bugzilla Severity Description
24524 enhancement Very slow process fork if RLIMIT_NOFILE is too high

Testing this PR locally

If you don't have a local development environment setup, you can use Digger to test this PR:

dub run digger -- build "master + phobos#8990"

@CyberShadow (Member) commented Apr 27, 2024

  1. I think this code is very Linux-specific. std.process should be POSIX-compatible.
  2. There is no guarantee that procfs is mounted.

Edit: I see now it falls back to the old code.

@trikko (Contributor, Author) commented Apr 27, 2024

1 - I think /dev/fd exists on many Unix-based OSes. Maybe /proc/self/fd is Linux-specific.
2 - That's why I check /dev/fd first.

Please note that this code could be simplified if scandir were available in druntime's dirent bindings.

@CyberShadow (Member):

I see this adds a third method of doing the same thing (/proc/fd -> poll -> for loop).

Has anyone checked how other language runtimes handle this (besides being consistent about CLOEXEC)?

@trikko (Contributor, Author) commented Apr 27, 2024

Some related examples:

LibreOffice uses a function from dirent.h that is not available in druntime: dirfd, which tells you which fd you are using to browse the directory (so you can exclude it and avoid closing the directory you are reading).

I'm not sure how standard dirent.h is, but with that function we could avoid the list and all the mallocs/frees. Probably just an over-optimization.

@trikko (Contributor, Author) commented Apr 27, 2024

Docker and podman also use this approach to close fds:
https://github.com/docker/docker-ce/blob/master/components/engine/daemon/graphdriver/devmapper/deviceset.go#L1265
https://github.com/containers/podman/blob/main/libpod/container_top_linux.go#L141

@CyberShadow (Member):

Docker and podman also use this approach to close fds:

Great research! This is good evidence that we're on the right track.

LibreOffice uses a function from dirent.h that is not available in druntime: dirfd, which tells you which fd you are using to browse the directory (so you can exclude it and avoid closing the directory you are reading).

I'm not sure how standard dirent.h is, but with that function we could avoid the list and all the mallocs/frees. Probably just an over-optimization.

It seems to be in POSIX:

https://pubs.opengroup.org/onlinepubs/9699919799.2013edition/functions/dirfd.html

I think that means it should be safe to use. It can be added to Druntime in parallel to a temporary private extern(C) declaration in Phobos.
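
Something along these lines, presumably (a hypothetical sketch of the temporary declaration, not reviewed code):

import core.sys.posix.dirent : DIR;

// Temporary private binding until druntime declares it; dirfd is in POSIX.1-2008.
private extern(C) int dirfd(DIR* dir) nothrow @nogc;

With that, the close loop could skip dirfd(dir) and close every other descriptor on the fly, so the intermediate list (and its mallocs/frees) would go away.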

@CyberShadow (Member) left a comment:

Approving now (thanks!), but if dirfd allows getting rid of the linked list, that would be even better.

@trikko (Contributor, Author) commented Apr 27, 2024

https://en.wikibooks.org/wiki/C_Programming/POSIX_Reference/dirent.h

Here they say it's a "pseudo-standard": I'm not sure that every C library implements that function. Is there a way to find out when these functions were added to the POSIX standard? Maybe druntime is relying on an older POSIX standard?

I don't want to mess everything up!

@CyberShadow (Member):

Is there a way to find out when these functions were added to the POSIX standard?

Bottom of that page says issue 7, which seems to have been released in 2017.

I don't want to mess everything up!

Well, that's what CI is for :)

But another way to approach this is to go through the list of OSes which are supported by DMD (which aren't many). GDC and LDC carry compatibility patches for lots of things anyway, I believe.

Yet another way is to use static if and use this function only if it's declared in Druntime for the current OS. Then, it can be added only for OSes which are known to have it implemented in their libc. This is done in std.file to detect the current platform's subsecond precision API.
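
Roughly this pattern (a sketch; closeOpenFds and its parameters are placeholder names, not anything in the patch):

import core.sys.posix.dirent;

void closeOpenFds(DIR* dir, int firstFdToClose)
{
    // Use dirfd only where druntime declares it for the current platform;
    // everywhere else keep the list-based fallback from the current patch.
    static if (is(typeof(&core.sys.posix.dirent.dirfd)))
    {
        immutable skip = dirfd(dir);
        // ... readdir(dir) and close each fd except `skip` on the fly ...
    }
    else
    {
        // ... collect fds into a temporary list, closedir, then close them ...
    }
}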

@thewilsonator (Contributor):

please change the commit title to Fix bugzilla 24524: [description of issue] so the bot picks it up.

@trikko changed the title from "Fix issue #24524" to "Fix bugzilla 24524: Very slow process fork if RLIMIT_NOFILE is too high" on Apr 27, 2024
@trikko (Contributor, Author) commented Apr 27, 2024

I wonder if I should still keep the old method for low limits. At that point it's probably faster to just iterate over and close all the fds, rather than reading a list from "/dev/fd".

I guess it also depends on how many files are open. If the maximum is 1000 and all 1000 are open, the old way will obviously work better.

But if only a couple of files are open, the new method wins.

In any case, if the limit is too high, the old method blows up. On macOS the hard limit is set to "unlimited", and this is really dangerous if someone sets a high soft limit :)

@thewilsonator (Contributor):

commit message title, not PR title

std/process.d Outdated
}
foreach (i; 0 .. maxToClose)
// Try to open the directory /dev/fd or /proc/self/fd
DIR* dir = opendir("/dev/fd");
A reviewer (Member) left a comment:

This is POSIX code, right? Does this exist on all POSIX platforms? BSDs, OSX, Cygwin, etc.?

@schveiguy (Member):

Hm... I was rather thinking getrlimit would be the test, and then only if it's over a certain limit would it make sense to use the /proc/fd read.

The poll mechanism has the advantage that it is only one syscall for the entire "read" operation.
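
Roughly, the poll path looks like this (a simplified sketch with hypothetical names; the pollfd array is the part that scales with the soft limit):

import core.stdc.stdlib : calloc, free;
import core.sys.posix.poll : pollfd, poll, POLLNVAL;
import core.sys.posix.unistd : close;

void closeByPoll(int firstFdToClose, int maxFds)
{
    // One pollfd per possible descriptor, zero timeout: a single syscall
    // tells us which slots are actually open (POLLNVAL means "not open").
    auto fds = cast(pollfd*) calloc(maxFds, pollfd.sizeof);
    if (fds is null) return;
    scope(exit) free(fds);

    foreach (i; 0 .. maxFds)
        fds[i].fd = i;

    if (poll(fds, maxFds, 0) >= 0)
    {
        foreach (i; firstFdToClose .. maxFds)
            if (!(fds[i].revents & POLLNVAL))
                close(fds[i].fd);
    }
}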

@ibuclaw (Member) commented Apr 28, 2024

@trikko (Contributor, Author) commented Apr 28, 2024

Hm... I was rather thinking getrlimit would be the test, and then only if it's over a certain limit would it make sense to use the /proc/fd read.

The poll mechanism has the advantage that it is only one syscall for the entire "read" operation.

... and the disadvantage that it allocates one struct for each possible fd, so it becomes slow and eats memory.

What is the right limit?

@trikko (Contributor, Author) commented Apr 28, 2024

I have done some tests with a ulimit starting from 1_000_000 and going down to 100_000, in 10 steps. For each step, I tested the performance with 0, 1000, …, 9000 open file descriptors. The results are attached.

Things to keep in mind:

  • In some cases the limit is much higher than 1,000,000 and can be up to 4000 times as much, even if the user then opens only one file.
  • How many files does a user normally open?

Having said that, it is necessary to find a balance between the two methods, deciding when to use one or the other based on the ulimit. Any ideas?

results.txt

@CyberShadow (Member):

Having said that, it is necessary to find a balance between the two methods, deciding when to use one or the other based on the ulimit. Any ideas?

How about benchmarking the various methods, seeing which one is faster in which circumstances, and choosing the implementation to use based on that?

We can use forkPipeOut as a rough estimate of the number of currently-open file descriptors.

@trikko (Contributor, Author) commented Apr 28, 2024

Having said that, it is necessary to find a balance between the two methods, deciding when to use one or the other based on the ulimit. Any ideas?

How about benchmarking the various methods, seeing which one is faster in which circumstances, and choosing the implementation to use based on that?

We can use forkPipeOut as a rough estimate of the number of currently-open file descriptors.

Even though POSIX says a new descriptor must get the lowest unused FD, it could be a good rough estimate. Or maybe we can open a few "/dev/null" handles to fill some holes. :)

Benchmark: did you see the results.txt file? Am I missing something?

@CyberShadow (Member):

Benchmark: did you see the results.txt file? Am I missing something?

Right - will the dumb for loop always be slower than poll, then?

If so, I guess the rule would be something like if (fd_limit / 10000 < nr_open_fds / 300) { use poll } else { use "/dev/fd" iteration }?

@trikko (Contributor, Author) commented Apr 28, 2024

With few FDs open, it wins. That is probably the common case. How much software have you written with more than a dozen FDs open at the same time? 🙂

@schveiguy (Member):

Yes I get it. I'm saying, the threshold can be much much lower, like 8k. 100k is too much, let alone 1 million, I don't think we should be using poll at that point. I'm sure the poll version beats the version that opens the directory with only 8k file descriptors, even with a small number of open files.

If we make the threshold 8k, it will use poll on most normal systems, and be fast, and use the dev filesystem above that and be fast on e.g. docker.
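
As a sketch, that dispatch would look something like this (8k is the suggested cut-off; closeByPoll, closeViaDevFd and closeEveryFdUpTo are hypothetical helpers standing in for the three methods discussed here):

import core.sys.posix.sys.resource : rlimit, getrlimit, RLIMIT_NOFILE;

void closeInheritedDescriptors(int firstFdToClose)
{
    rlimit r;
    getrlimit(RLIMIT_NOFILE, &r);

    if (r.rlim_cur <= 8 * 1024)
    {
        // Small soft limit: a single poll() over the whole range is cheap.
        // closeByPoll(firstFdToClose, cast(int) r.rlim_cur);
    }
    else
    {
        // Huge soft limit (Docker, etc.): list /dev/fd instead of touching every
        // possible descriptor, and fall back to the plain loop if that fails.
        // if (!closeViaDevFd(firstFdToClose))
        //     closeEveryFdUpTo(firstFdToClose, cast(int) r.rlim_cur);
    }
}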

@trikko (Contributor, Author) commented Apr 30, 2024

Yes I get it. I'm saying, the threshold can be much much lower, like 8k. 100k is too much, let alone 1 million, I don't think we should be using poll at that point. I'm sure the poll version beats the version that opens the directory with only 8k file descriptors, even with a small number of open files.

If we make the threshold 8k, it will use poll on most normal systems, and be fast, and use the dev filesystem above that and be fast on e.g. docker.

I think values over 8k are not that uncommon. Consider that, for example, machines running MongoDB usually have a 64k limit. See the "recommended ulimit settings" section here.

Don't you like the heuristic way of determining which method to use?

I can of course replace this:

if (
    r.rlim_cur/(forkPipeOut+1) > 120 ||     // ... the number of file descriptors is small ...
    r.rlim_cur > 1024*1024                  // ... or the soft limit is high (in this case poll would allocate a huge array)
)

Simply with:

if (r.rlim_cur > 8*1024) 

Is this what you mean?

@trikko (Contributor, Author) commented Apr 30, 2024

If we make the threshold 8k, it will use poll on most normal systems, and be fast, and use the dev filesystem above that and be fast on e.g. docker.

Test with limit == 10k, 9k, 8k, ...

The dumb method still seems to be faster than poll when limit/open > ~120 (stdin/stdout/stderr alone raise the count to 3).

(Anyway, we can set a "poll only" zone for limit < 20k or something like that, but I would keep the heuristic algorithm.)

test10000.txt

@schveiguy (Member):

Can you post the code you used for testing? Does the test include the fork/exec? My interest is only in testing the closing of file descriptors, not the other stuff.

@trikko (Contributor, Author) commented Apr 30, 2024

Just a stupid script, using a cloned version of std.process:

import std.stdio;
import std.conv;
import std.datetime : Clock;


import core.stdc.stdio : FILE, fopen, fclose;
import std.datetime : SysTime;

void main()
{
	import core.sys.posix.sys.resource : rlimit, getrlimit, RLIMIT_NOFILE, setrlimit;


	// Get the maximum number of file descriptors that could be open.
	rlimit r;
	getrlimit(RLIMIT_NOFILE, &r);

	r.rlim_cur = 10000;
	setrlimit(RLIMIT_NOFILE, &r);


	for(size_t i = 0; i <= 9000; i+=1000)
	{
		r.rlim_cur = 10000 - i;
		stderr.writeln("\n\n--- Maximum number of file descriptors: ", r.rlim_cur);
		setrlimit(RLIMIT_NOFILE, &r);

		for(size_t k = 0; k < 10; ++k)
		{
			immutable file_to_open = 80*k;
			stderr.writeln("\n  - FDs open: ", file_to_open);
			stderr.writeln("  - Starting spawnProcess(`echo`); 10 times...");

			FILE *[] fds;
			foreach(x; 0 .. file_to_open)
			{
				fds ~= fopen("/dev/urandom", "r");
			}

			SysTime c;

			c = Clock.currTime;
			foreach(_; 0 .. 10)
			{
				import std.process: spawnProcess;
				spawnProcess("echo");
			}
			stderr.writeln("Time (poll): ", Clock.currTime - c);

			c = Clock.currTime;
			foreach(_; 0 .. 10)
			{
				import pro2: spawnProcess;
				spawnProcess("echo");
			}
			stderr.writeln("Time (/dev/fd): ", Clock.currTime - c);

			foreach(fd; fds)
			{
				fclose(fd);
			}

			fds = null;
		}
	}


}

@schveiguy (Member):

OK, so the timings include the actual fork/exec of the other process. I think I will try to build a test that does just the fd closing.

@trikko (Contributor, Author) commented Apr 30, 2024

OK, so the timings include the actual fork/exec of the other process. I think I will try to build a test that does just the fd closing.

That's why I spawn the same process 10 times.

@trikko (Contributor, Author) commented Apr 30, 2024

This works with simplified logic:

  • over 128*1024 descriptors => dumb way
  • less than 128*1024 descriptors => poll
  • fallback => very dumb

@schveiguy (Member) left a comment:

Nice, this looks like what I was expecting, thanks!

@schveiguy (Member):

/home/runner/work/phobos/dmd/compiler/test/test_results/runnable/d/testthread_0: undefined symbol: _D4core6thread8osthread6Thread5sleepFNbNiNeSQBq4time8DurationZv

How is the tester not finding some thread symbol? Nothing is changing in druntime...

@ibuclaw (Member) commented May 4, 2024

/home/runner/work/phobos/dmd/compiler/test/test_results/runnable/d/testthread_0: undefined symbol: _D4core6thread8osthread6Thread5sleepFNbNiNeSQBq4time8DurationZv

How is the tester not finding some thread symbol? Nothing is changing in druntime...

Seems to be introduced by #8992

@CyberShadow

This comment was marked as outdated.

@ibuclaw (Member) commented May 4, 2024

Seems to be introduced by #8992

That doesn't look right. It's not even on the same branch. Did you mean to link to a PR in another repo?

If I understand right, it introduces a second version of the compiler to the CI environment.

Reverting it fixes the phobos pipelines.

https://github.com/dlang/phobos/actions/workflows/main.yml?query=branch%3Amaster

@CyberShadow (Member):

Oops, you're right!

@LightBender added the "Review:Salvage" label (This PR needs work, but we want to keep it and it can be salvaged) on Oct 27, 2024
@trikko (Contributor, Author) commented Oct 28, 2024

I ran into this bug again, in a different form. This time I installed Ubuntu 24.10, and both dub and serve-d are extremely slow inside VSCode and consume several GB of RAM each. This is because ulimit -n inside VSCode returns a huge number. (The Ubuntu terminal, on the other hand, says 1024.)

@trikko (Contributor, Author) commented Oct 28, 2024

The worst thing is that, if I'm not mistaken, the tools aren't bootstrapped, so if this fix is merged, the new dub will still be compiled with the old version, and the bug will remain. I hope they either bootstrap it or, at least, add a workaround in the dub code (and other tools?) to set a maximum limit for open files, as I did here for serverino: https://github.com/trikko/serverino/blob/05c89790669fa12716918373bf7fdbfff8b34b78/source/serverino/main.d#L129
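
For reference, that kind of workaround is just a clamp on the soft limit at startup. A minimal sketch (the 4096 cap is an arbitrary example; see the serverino link for the real code):

import core.sys.posix.sys.resource : rlimit, getrlimit, setrlimit, RLIMIT_NOFILE;

void clampOpenFileLimit(uint cap = 4096)
{
    rlimit r;
    if (getrlimit(RLIMIT_NOFILE, &r) != 0) return;
    if (r.rlim_cur > cap)
    {
        r.rlim_cur = cap;   // lower only the soft limit; the hard limit stays untouched
        setrlimit(RLIMIT_NOFILE, &r);
    }
}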

@kinke (Contributor) commented Oct 29, 2024

@trikko: Could you please rebase? Looks like this should have been merged half a year ago, but hit some CI issue. I'd be okay with targeting stable, so that this still lands in v2.110 (the sooner the better IMO).

@trikko (Contributor, Author) commented Oct 29, 2024

Sorry Martin, but another pull request was accepted after a few months and has partially solved the issue, which is probably sufficient in most cases. Rather than a rebase, I would need to review the code, create a new pull request, and get it approved again. If anyone wants to pick up the code, they're welcome to do so.

@kinke (Contributor) commented Oct 29, 2024

Ah alright, thx for explaining.

@schveiguy (Member):

superseded by #9077

@schveiguy closed this on Oct 30, 2024
Labels: Merge:Needs Rebase, Review:Salvage, Severity:Enhancement