reverseproxy: Rewrite request buffering for fastcgi #6639
base: master
Conversation
@@ -43,7 +45,10 @@ func init() {
// dial_timeout <duration>
// read_timeout <duration>
// write_timeout <duration>
// capture_stderr
I don't think you meant to remove this
// buffering disabled
if !c.buffer {
	c.rwc.Close()
	return newErrorResponse(http.StatusLengthRequired), nil
Hmm, would it be better to throw a caddyhttp.Error instead? That way it could be caught by handle_errors
to render an error page etc. Or, at least an error type internal to this client that the transport.go could check for and convert to a caddyhttp.Error if we don't want the client.go to depend on anything in caddy.
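For illustration, a minimal sketch of the second suggestion, assuming a hypothetical sentinel error in client.go that transport.go maps to a caddyhttp.Error (the names here are made up, not from this PR):

```go
package fastcgi

import (
	"errors"
	"net/http"

	"github.com/caddyserver/caddy/v2/modules/caddyhttp"
)

// errLengthRequired is a hypothetical sentinel error that client.go could
// return when buffering is disabled and the request has no Content-Length;
// this keeps client.go free of any dependency on caddy.
var errLengthRequired = errors.New("fastcgi: no Content-Length and request body buffering is disabled")

// mapClientError is a hypothetical helper in transport.go that converts the
// sentinel into a caddyhttp.Error, so handle_errors can catch it and render
// an error page.
func mapClientError(err error) error {
	if errors.Is(err, errLengthRequired) {
		return caddyhttp.Error(http.StatusLengthRequired, err)
	}
	return err
}
```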
// total disk storage allowed by the request body buffer
FileBufferSizeLimit int64 `json:"file_buffer_size_limit,omitempty"`
// the path to store the temporary files for the request body buffer
FileBufferFilepath string `json:"file_buffer_filepath,omitempty"`
Should it be called "dir" instead? This isn't exactly the "file" path, since the filename is generated.
// test if temporary file can be created
file, err := os.CreateTemp(t.FileBufferFilepath, "caddy-fastcgi-buffer-")
if err != nil {
	return fmt.Errorf("failed to create temporary file: %v", err)
Hmm, creating a file and returning an error here could mean Caddy fails to start (or a config reload fails). I wonder whether that's a good thing or not. Maybe this test needs to happen at runtime instead, or maybe it should happen in Validate() instead (it still happens at startup, but is separated out).
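A rough sketch of what moving the check into Validate() could look like, assuming the FileBufferFilepath field from this PR and a stand-in Transport struct; Caddy runs Validate() during config load, so this still fails fast at startup or reload, but the check is separated from the rest of provisioning:

```go
package fastcgi

import (
	"fmt"
	"os"
)

// Transport stands in for the fastcgi transport struct in this PR; only the
// field relevant to the check is shown here.
type Transport struct {
	FileBufferFilepath string `json:"file_buffer_filepath,omitempty"`
}

// Validate checks that request body buffer files can actually be created in
// the configured directory.
func (t Transport) Validate() error {
	if t.FileBufferFilepath == "" {
		return nil // OS default temp dir will be used
	}
	f, err := os.CreateTemp(t.FileBufferFilepath, "caddy-fastcgi-buffer-")
	if err != nil {
		return fmt.Errorf("cannot create request body buffer files in %s: %v", t.FileBufferFilepath, err)
	}
	f.Close()
	return os.Remove(f.Name())
}
```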
// body_buffer_disabled
// body_buffer_memory_limit <size>
// file_buffer_size_limit <size>
// file_buffer_filepath <path>
It's ambiguous whether these configure the request or response body buffer.
// nginx doesn't have an option to limit the total file buffer size
defaultFileBufferSize = 100 << 20 // 100MB
It's good to have a default limit, but my first thought is: could an evil client abuse this for a storage exhaustion attack? They could open multiple requests and send garbage data in each of them, but then "never finish", which would keep the storage locked up for each request. Say you only have 2GB of storage (a reasonable amount for a VPS); then they'd only need to make about 20 requests that each eat up 99MB to exhaust the storage. That could mean things like cert issuance failing to write to disk, etc.
We'll need to think about how to mitigate things like that (slow-loris style attacks). Maybe we need something like a minimum bandwidth check that cancels the request if the read speed drops below a certain rate (1kb/s? Are there any legitimate requests that can dip that low?). It sounds pretty tricky to implement, though.
Also, longer term we should think about generalizing the buffering so it could be used by any reverse_proxy transport. There are cases where it could be useful for some users, like being able to retry a POST on a different upstream (rewind and resend the body), or to tee it to two upstreams at once (e.g. to send it to a new version of an upstream to compare the result with the old one -- not sure if buffering is needed for that, but it could help?).
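The minimum-bandwidth idea above could look roughly like the sketch below, assuming a plain io.Reader wrapper with illustrative names (minRateReader, a 1 KB/s-style threshold). It also shows why this is tricky: a client that stalls completely blocks inside the underlying Read, so a real implementation would additionally need read deadlines or a timer goroutine.

```go
package fastcgi

import (
	"errors"
	"io"
	"time"
)

var errTooSlow = errors.New("request body arriving below minimum rate")

// minRateReader aborts a body read when fewer than minBytes arrive within a
// full window, bounding how long a slow-loris style request can keep a file
// buffer pinned on disk.
type minRateReader struct {
	r          io.Reader
	minBytes   int64         // e.g. 1024 with a 1s window, roughly 1 KB/s
	window     time.Duration // e.g. time.Second
	windowEnd  time.Time
	windowRead int64
}

func newMinRateReader(r io.Reader, minBytes int64, window time.Duration) *minRateReader {
	return &minRateReader{r: r, minBytes: minBytes, window: window, windowEnd: time.Now().Add(window)}
}

func (m *minRateReader) Read(p []byte) (int, error) {
	n, err := m.r.Read(p)
	m.windowRead += int64(n)
	// only enforce the rate on successful reads so EOF and real errors pass through
	if now := time.Now(); err == nil && now.After(m.windowEnd) {
		if m.windowRead < m.minBytes {
			return n, errTooSlow
		}
		m.windowRead = 0
		m.windowEnd = now.Add(m.window)
	}
	return n, err
}
```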
Thank you for getting a proper fix put together for this @WeidiDeng! ❤️
Is this superseded by #6661? If so, maybe we can close this. (Thanks for all your help in fixing the problem, Weidi!)
It might be slightly more difficult to encourage Caddy use in the PHP space if buffering is completely off the table. The standard-bearer reverse proxy in that space does buffer to the filesystem for FastCGI by default. But I can also appreciate that buffering is much more complicated.
Follow-up to #6637. This patch now tries to buffer the request body for fastcgi requests by default, using memory and file based buffers. The buffering behavior can be disabled, and the buffer limits can be configured as well.

This makes sure Caddy works out of the box when requests don't have a content length (typically chunked encoding), without having to configure request buffering, which buffers all requests and does not set `content-length` if the request can't be buffered in memory; `content-length` is required when communicating with php-fpm.

If buffering is disabled, the response for requests without `content-length` will be `411 Length Required`, typically for HTTP/1 requests that use `chunked` encoding. If the buffer limit is reached, the response will be `413 Request Entity Too Large`.

By default, the memory buffer is `16k` per request, and all file based buffers together can only use `100MB` of disk space. The buffer file path is system dependent, and Caddy will fail to start if it can't create files in that directory. The new configuration parameters are `body_buffer_disabled`, `body_buffer_memory_limit`, `file_buffer_size_limit`, and `file_buffer_filepath`.
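For context, here is a minimal sketch of the memory-then-file spooling described above, with illustrative names (`bodyBuffer`, `bufferBody`); the shared 100MB accounting across all requests and the PR's actual field names are not modeled here.

```go
package fastcgi

import (
	"bytes"
	"errors"
	"io"
	"os"
)

var errBodyTooLarge = errors.New("request body exceeds buffer limits") // caller maps this to 413

// bodyBuffer spools a request body of unknown length: up to memLimit bytes
// stay in memory and the rest spills into a temp file, so the total size is
// known and content-length can be set before talking to php-fpm.
type bodyBuffer struct {
	mem  bytes.Buffer
	file *os.File
	size int64
}

func bufferBody(body io.Reader, memLimit, fileLimit int64, dir string) (*bodyBuffer, error) {
	b := &bodyBuffer{}
	n, err := io.CopyN(&b.mem, body, memLimit)
	b.size = n
	if err == io.EOF {
		return b, nil // the whole body fit in memory
	}
	if err != nil {
		return nil, err
	}
	f, err := os.CreateTemp(dir, "caddy-fastcgi-buffer-")
	if err != nil {
		return nil, err
	}
	b.file = f
	n, err = io.CopyN(f, body, fileLimit+1) // +1 to detect overflow
	b.size += n
	if err != nil && err != io.EOF {
		return nil, err
	}
	if n > fileLimit {
		return nil, errBodyTooLarge
	}
	return b, nil
}

// reader replays the buffered body: the in-memory part first, then the file.
func (b *bodyBuffer) reader() (io.Reader, error) {
	if b.file == nil {
		return &b.mem, nil
	}
	if _, err := b.file.Seek(0, io.SeekStart); err != nil {
		return nil, err
	}
	return io.MultiReader(&b.mem, b.file), nil
}
```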
Another fix is that `content-length` is now checked when sending the request to php-fpm, so the request size is limited and a client can't send a faulty `content-length`.
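That last fix could look roughly like this sketch, which bounds the body to the declared content-length when writing to php-fpm (`writeBody` is an illustrative name, not the PR's actual function):

```go
package fastcgi

import (
	"fmt"
	"io"
)

// writeBody copies at most contentLength bytes of the body to the fastcgi
// stream and errors if fewer bytes arrive, so a client cannot send a body
// that disagrees with its declared content-length to php-fpm.
func writeBody(w io.Writer, body io.Reader, contentLength int64) error {
	n, err := io.Copy(w, io.LimitReader(body, contentLength))
	if err != nil {
		return err
	}
	if n != contentLength {
		return fmt.Errorf("request body has %d bytes but Content-Length declared %d", n, contentLength)
	}
	return nil
}
```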