Memory not released #344
Hi @Zabrane, thanks for the report, I will take a look. Could you share some details of the benchmark you ran? Is this a handshake-oriented or a throughput-oriented test? HTTP keep-alive? Number of clients / request rate? Also, is there anything else special about your config? Could you perhaps share your hitch command line and hitch.conf?
Hi @daghf, thanks for taking the time to look at this. To reproduce the issue, first unpack and start the Node.js/Express backend:
$ unzip -a srv.js.zip
$ npm install express
$ node srv.js
::: listening on http://localhost:7200/
Here is my hitch.conf:

## Listening
frontend = "[0.0.0.0]:8443"
## https://ssl-config.mozilla.org/
ciphers = "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384"
tls-protos = TLSv1.2
## TLS for HTTP/2 traffic
alpn-protos = "http/1.1"
## Send traffic to the backend without the PROXY protocol
backend = "[127.0.0.1]:7200"
write-proxy-v1 = off
write-proxy-v2 = off
write-ip = off
## List of PEM files, each with key, certificates and dhparams
pem-file = "hitch.pem"
## set it to number of cores
workers = 10
backlog = 1024
keepalive = 30
## Logging / Verbosity
quiet = on
log-filename = "/dev/null"
## Automatic OCSP staple retrieval
ocsp-verify-staple = off
ocsp-dir = ""
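As an aside, the workers value above follows the config comment and matches the machine's core count; on Linux (assumed here, and consistent with the Ubuntu setup described later in the thread) that count can be checked with:

$ nproc                               # number of processing units available
$ grep -c ^processor /proc/cpuinfo    # same count, read from /proc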
Then, run it:

$ hitch -V
hitch 1.7.0
$ hitch --config=./hitch.conf
$ curl -k -D- -q -sS "https://localhost:8443/" --output /dev/null
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 6604
Date: Tue, 22 Dec 2020 12:01:33 GMT
Connection: keep-alive

Finally, run it like this:

$ echo "GET https://localhost:8443/" | vegeta attack -insecure -header 'Connection: keep-alive' -timeout=2s -rate=1000 -duration=1m | vegeta encode | vegeta report
Requests [total, rate, throughput] 60000, 1000.02, 1000.02
Duration [total, attack, wait] 59.999s, 59.999s, 219.979µs
Latencies [min, mean, 50, 90, 95, 99, max] 165.935µs, 262.688µs, 230.6µs, 333.352µs, 375.975µs, 502.351µs, 16.373ms
Bytes In [total, mean] 396240000, 6604.00
Bytes Out [total, mean] 0, 0.00
Success [ratio] 100.00%
Status Codes [code:count] 200:60000
Error Set:

During the stress test, monitor Hitch's memory usage with ps_mem.py:

$ sudo su
root$ ps_mem.py -p `pgrep -d, hitch | sed -e 's|,$||'`
root$ watch -n 3 "ps_mem.py -p `pgrep -d, hitch | sed -e 's|,$||'`"

You can set vegeta's …

Please let me know if you need anything else. NOTE: on MacOS, …
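If ps_mem.py is not available, a rough approximation (total RSS only, without the shared-memory accounting ps_mem.py does) can be had with plain procps ps; this assumes the processes are simply named hitch:

$ ps -o rss= -C hitch | awk '{sum += $1} END {printf "%.1f MiB\n", sum/1024}'   # ps prints RSS in KiB; awk sums all hitch processes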
Hi @daghf, and Happy New Year. Any update on this? :-)
Hi @Zabrane. I haven't had any luck reproducing this, even after setting up something identical to your environment (Ubuntu 20.04, gcc 9.3, OpenSSL 1.1.1f) and running vegeta with your parameters.

I did find a few inconsequential small memory leaks relating to a config file update, which I fixed in a commit I just pushed. However, these are not the kind of memory leaks that would incur growing memory usage related to traffic or running time.
@daghf thanks for your time looking into this issue. We are still seeing this behaviour in 2 different products behind Hitch.

One last question before I close this issue, if you don't mind: if the …

Thanks
Have the same problem here; Hitch was taking up to 24 GB of RAM until it was killed by the OOM killer (Out of memory: Kill process # (hitch) score 111 or sacrifice child).
@robinbohnen thanks for confirming the issue. We still suffer from the memory problem, and the current workaround is to manually kill/restart Hitch.

We are considering switching to stunnel 5.58, HAProxy 2.3 or Envoy 1.17. Caveat: the …
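If it helps anyone, that kill/restart workaround could in principle be automated. Purely as an illustrative sketch (the 15-minute interval, the 2 GiB threshold and the hitch systemd unit name are assumptions, not what we actually run), a root cron entry could look like:

# /etc/cron.d/restart-hitch (hypothetical): restart hitch when its total RSS exceeds ~2 GiB (2097152 KiB)
*/15 * * * * root [ "$(ps -o rss= -C hitch | awk '{s+=$1} END {print s+0}')" -gt 2097152 ] && systemctl restart hitch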
@Zabrane, since we are having trouble reproducing the issue, could you try either sharing the certificates and config you test with, or a Docker setup that reproduces the problem?
@gquintard we use 1 certificate and 1 CA, as explained above. Unfortunately, we don't rely on Docker for our services. It took us 6 weeks to be able to report the issue here (we had to get approval from the business; we work for a private bank). @robinbohnen, could you please shed more light on your config?
We have about 3500 Let's Encrypt certificates served by Hitch, and we don't use Docker either.
I think what @gquintard was asking is rather: can you reproduce this behavior in a Docker or Vagrant (or other) setup that we could duplicate on our end to try to observe it as well?
FWIW, we observed something similar. In our case we had 300-500K concurrent connections; when the connection count dropped, RSS continued to increase until stabilizing around 90 GB. After trying a variety of adjustments, we ended up loading jemalloc via LD_PRELOAD. I don't have a firm explanation, but it does remind me a bit of this post where it's theorized that the excess memory usage of libc malloc involves fragmentation caused by multithreading. I'm not sure if that would apply to hitch.
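For reference, preloading jemalloc is just a matter of pointing LD_PRELOAD at the library before starting hitch. A minimal sketch, assuming Ubuntu 20.04's libjemalloc2 package (the library path differs on other distributions, and the hitch invocation is the one from earlier in this thread):

$ sudo apt install libjemalloc2
$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 hitch --config=./hitch.conf   # hitch's malloc/free now go through jemalloc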
Hi guys,

I'm facing the same issue using Hitch 1.7.0 on Ubuntu 20.04 LTS. While stress testing (with vegeta) our backend app, which sits behind Hitch, we noticed that Hitch's memory never gets released back to the system.
This is Hitch's memory usage before starting the benchmark (tracked with ps_mem.py): [screenshot]

And this is Hitch's memory usage when the benchmark was done: [screenshot]

The memory has still not been released (24 hours later).
My config:

Ubuntu 20.04 LTS
Hitch 1.7.0
OpenSSL 1.1.1f
gcc 9.3.0