Memory leak in readMIMEHeader #1650

Open
nuttert opened this issue Jun 16, 2024 · 4 comments

Labels
defect Suspected defect such as a bug or regression

Comments


nuttert commented Jun 16, 2024

Observed behavior

[screenshot: client memory usage]

I noticed that my client was using 67 GB of memory. First, I checked pprof and got the following result:
[screenshot: pprof heap profile]

I also occasionally see this in the client logs: `context deadline exceeded`
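
For reference, a minimal sketch of one way to expose pprof in a Go client so that a heap profile like the one above can be captured (illustrative only; it assumes the service does not already register the handlers, and the port is arbitrary). The snapshot can then be fetched with go tool pprof http://localhost:6060/debug/pprof/heap:

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// Serve the profiling endpoints on a side port.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... start the NATS connection and subscriptions here ...
	select {}
}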

Expected behavior

I expect memory to be released after a message has been processed.

Server and client version

Client libs:

[github.com/nats-io/nats.go](https://pkg.go.dev/github.com/nats-io/[email protected]) v1.34.1
[`github.com/nats-io/nats-server/v2`](https://pkg.go.dev/github.com/nats-io/nats-server/v2)

Server:

nats-server  --version
nats-server: v2.10.14

Host environment

I pinned my client to 2 cores and did not set any memory limits (the host has 400+ GB of memory).
The server is pinned to 8 cores and does not consume much (Virt 13.5G, RSS 3929M).

Steps to reproduce

I created a lot of subscriptions from the client side within a single connection (a rough sketch of this shape follows the connection info below).

   {
      "kind": "Client",
      "type": "nats",
      "start": "2024-06-14T17:59:41.435315672Z",
      "last_activity": "2024-06-16T03:18:36.900453614Z",
      "rtt": "163µs",
      "uptime": "1d9h18m55s",
      "idle": "0s",
      "pending_bytes": 0,
      "in_msgs": 24556134,
      "out_msgs": 67034388,
      "in_bytes": 2380945880,
      "out_bytes": 29631620750,
      "subscriptions": 11610,
      "lang": "go",
      "version": "1.36.0"
    },

And after 2 days the memory had increased even though the consumers were passive (they did not send a lot of messages).
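
A rough sketch of the reproduction shape described above, i.e. many JetStream subscriptions sharing one connection. The subject names, the subscription count, and the assumption that a matching stream already exists are illustrative, not taken from the report:

package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Create many push subscriptions on a single connection; a stream
	// covering the bench.sub.> subjects is assumed to exist already.
	for i := 0; i < 10000; i++ {
		subject := fmt.Sprintf("bench.sub.%d", i)
		if _, err := js.Subscribe(subject, func(msg *nats.Msg) {
			msg.Ack()
		}); err != nil {
			log.Printf("subscribe %s: %v", subject, err)
		}
	}

	select {}
}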

@nuttert nuttert added the defect Suspected defect such as a bug or regression label Jun 16, 2024
@piotrpio piotrpio self-assigned this Jun 17, 2024
@piotrpio (Collaborator)

Hey @nuttert, thanks for finding this. I'll let you know once we have a fix.

nuttert (Author) commented Jun 17, 2024

BTW, here is how I create the JetStream context and the subscription:

js, err := nc.JetStream()
............
if config.SyncHandler {
	// Handle each message synchronously inside the subscription callback.
	jetSub, err = js.Subscribe(channel, func(msg *nats.Msg) {
		c.HandleMessage(ctx, msg, sub)
	}, opt...)
} else {
	// Handle each message in its own goroutine.
	jetSub, err = js.Subscribe(channel, func(msg *nats.Msg) {
		go c.HandleMessage(ctx, msg, sub)
	}, opt...)
}
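
Not part of the original snippet, but related to the expectation that memory is released after processing: an illustrative helper that drains the subscription and then the connection on shutdown, so that buffered messages can be collected. The package name and the nc/jetSub parameters are assumptions matching the snippet above:

package natsutil // hypothetical package name for this sketch

import (
	"log"

	"github.com/nats-io/nats.go"
)

// Shutdown drains the JetStream subscription and then the connection so that
// buffered messages (and their parsed headers) become eligible for GC.
func Shutdown(nc *nats.Conn, jetSub *nats.Subscription) {
	if jetSub != nil {
		if err := jetSub.Drain(); err != nil {
			log.Printf("drain subscription: %v", err)
		}
	}
	if err := nc.Drain(); err != nil {
		log.Printf("drain connection: %v", err)
	}
}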

nuttert (Author) commented Jul 23, 2024

Hey @piotrpio, any news? We have to restart our client once a week to flush out the memory allocated for the nats.Msg objects. Fortunately this is still a small service, so we can afford the restarts without much impact.

@piotrpio (Collaborator)

Hey @nuttert, sorry for taking so long on this. I'm having a hard time reproducing it: I can get to a profile similar to the one you provided, but everything gets cleaned up nicely by the GC. What sizes of payloads/headers are you usually consuming? Do you discard the messages after handling them (i.e., they aren't kept in memory)? And finally, which Go version are you using?
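
One way to check whether the retained memory is still live or simply has not been collected yet (an editorial sketch, not something suggested in the thread) is to force a collection in the client and log the runtime's heap statistics periodically:

package main

import (
	"log"
	"runtime"
	"runtime/debug"
	"time"
)

// logHeap forces a full GC, returns freed pages to the OS, and logs the
// runtime's view of the heap. If HeapInuse stays high afterwards, the memory
// is still referenced; if it drops, it was collectable garbage.
func logHeap(tag string) {
	runtime.GC()
	debug.FreeOSMemory()
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	log.Printf("%s: HeapAlloc=%d MiB HeapInuse=%d MiB Sys=%d MiB",
		tag, m.HeapAlloc>>20, m.HeapInuse>>20, m.Sys>>20)
}

func main() {
	for {
		logHeap("heap check")
		time.Sleep(time.Minute)
	}
}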
