About ClientPolicy.MinConnectionsPerNode vs server's proto-fd-idle-ms #374
Comments
You being on v1.x is the bigger news to me :)
Yes, it's retried. Let me provide more details:
func (clnt *Client) Get(policy *BasePolicy, key *Key, binNames ...string) (*Record, error) {
I0520 02:44:42.754393 command.go:2169] [aerospike] Node BB9353C99F5580E 10.y.y.y:3000: write tcp 10.x.x.x:57552->10.y.y.y:3000: write: broken pipe
I0520 02:44:42.755625 command.go:2169] [aerospike] Node BB90D6223B88B0A 10.y2.y2.y2:3000: write tcp 10.x.x.x:54522->10.y2.y2.y2:3000: write: broken pipe
I0520 02:44:42.756857 command.go:2169] [aerospike] Node BB9353C99F5580E 10.y3.y3.y3:3000: write tcp 10.x.x.x:33098->10.y3.y3.y3:3000: write: broken pipe
Then the last
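For context, a hedged sketch of what such a retried read might look like at the call site; the namespace, set, key, bin names, and seed host are placeholders, and the import path depends on the client major version in use.

```go
package main

import (
	"log"

	as "github.com/aerospike/aerospike-client-go" // add the /vN suffix matching your client major version
)

func main() {
	client, err := as.NewClient("10.y.y.y", 3000) // placeholder seed host
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	readPolicy := as.NewPolicy()
	readPolicy.MaxRetries = 2 // reads like the Get above are retried on transient errors

	key, err := as.NewKey("test", "demo", "user-1") // placeholder namespace/set/key
	if err != nil {
		log.Fatal(err)
	}

	record, err := client.Get(readPolicy, key, "bin1")
	if err != nil {
		log.Fatal(err) // e.g. "write: broken pipe" when the server reaped an idle socket
	}
	log.Println(record.Bins)
}
```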
@khaf any further comment?
I think I understand the issue; I just need to implement the solution. ETA this week.
@khaf thanks so much. Looking forward to good news.
@khaf any update on this thread? We recently encountered another issue because of this.
Issue
Our Aerospike cluster has 48 nodes. One client sets ClientPolicy.MinConnectionsPerNode to 100, so the expected number of connections to the cluster is 4800, and we found that nodeStats.ConnectionsOpen is indeed 4800.
Analysis
We set min-conns to handle peak traffic. For regular traffic we don't need that many connections, so some of them sit idle past the idle timeout. But those connections never get a chance to be closed, even once idle-timed-out, because of min-conns. They are only touched again at the next traffic peak, which may produce Aerospike access errors or trigger the creation of many new connections.
Proposal
// Proposal: sweep every heap and keep dropping idle connections from the
// tail until none remain.
func (h *connectionHeap) DropIdle() {
	for i := 0; i < len(h.heaps); i++ {
		for h.heaps[i].DropIdleTail() {
		}
	}
}
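To make the intent of the proposal concrete, here is a self-contained sketch of the same drop-from-the-tail idea; idleConn, connPool, and the 30-second timeout are illustrative assumptions, not the library's actual internals.

```go
package main

import (
	"fmt"
	"time"
)

// idleConn and connPool are stand-ins for the real pool types.
type idleConn struct {
	lastUsed time.Time
}

type connPool struct {
	conns       []*idleConn // kept warmest-first, so idle connections accumulate at the tail
	idleTimeout time.Duration
}

// dropIdleTail drops one timed-out connection from the tail, returning true
// while there may be more cleanup to do.
func (p *connPool) dropIdleTail() bool {
	n := len(p.conns)
	if n == 0 {
		return false
	}
	if time.Since(p.conns[n-1].lastUsed) < p.idleTimeout {
		return false
	}
	p.conns = p.conns[:n-1]
	return true
}

// dropIdle mirrors the proposal above: keep dropping from the tail until
// nothing idle is left.
func (p *connPool) dropIdle() {
	for p.dropIdleTail() {
	}
}

func main() {
	pool := &connPool{
		idleTimeout: 30 * time.Second,
		conns: []*idleConn{
			{lastUsed: time.Now()},                       // recently used, kept
			{lastUsed: time.Now().Add(-2 * time.Minute)}, // idle past the timeout, dropped
		},
	}
	pool.dropIdle()
	fmt.Println("remaining connections:", len(pool.conns)) // remaining connections: 1
}
```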
Sorry, another high-priority issue popped up that I had to take care of. I'll release the fix for this issue tomorrow.
@khaf thanks for the update. We are using both v4.x and v5.x clients. Looking forward to the release.
I haven't forgotten about this ticket. The solutions that I came up with this week have not been satisfactory. Since this is an important feature of the client and I'll have to port it forward to the newer versions (and quite possibly to other clients as well), I need to take my time. But this is my current active issue, so I will solve it before moving on to other work.
@khaf Yes, we're using Go mod. BTW, it seems the v4 branch currently works well with Go mod.
OK, after writing and testing the code for this in the client, it turned out in an engineering meeting that the correct resolution for this issue is to set the server idle time to 0 to avoid reaping the connections from the server side. Please let me know how that goes.
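For reference, the server-side change described here amounts to disabling the idle connection reaper. A hedged sketch of the relevant aerospike.conf stanza follows; the exact context and defaults depend on the server version, so verify against the Aerospike server documentation before applying it.

```
service {
    # 0 disables server-side reaping of idle client connections,
    # so the server never closes sockets the client holds for min-conns
    proto-fd-idle-ms 0
}
```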
Closing this. Feel free to reopen or file a new ticket in case you have further questions.
@khaf We just re-encountered this issue recently. Previously we just disabled the conn-pool in not-so-busy environments. I understand the idea of setting the server idle timeout to 0. Is it possible to re-evaluate the solution? BTW, if you still prefer the current solution, is it possible to add a comment to IdleTimeout or MinConnectionsPerNode, to avoid other misunderstandings? Thanks
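For illustration only, and not the library's actual code or documentation, the kind of caveat being requested on these fields might read roughly like this:

```go
// Sketch of possible documentation wording; ClientPolicy here is a stand-in,
// not the real aerospike-client-go type.
package sketch

type ClientPolicy struct {
	// MinConnectionsPerNode is the minimum number of connections the client
	// keeps open to each node. NOTE: connections held to satisfy this minimum
	// are not closed by IdleTimeout, so the server's proto-fd-idle-ms should
	// be disabled (0) or kept well above expected idle periods; otherwise the
	// server may close them and the next request can fail with a broken-pipe
	// error.
	MinConnectionsPerNode int
}
```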
I reopened the issue so that I can track it. I don't know if product or engineering will accept any change or addition to the current solution, but I'll bring it up. For the comments on
ps: What version of the client are you currently on?
@khaf sorry for the late reply. For the comment, something like
And my app version is v4.5.0.
@khaf any update on this thread?
Any update here? I'm running client 6.13 and see the number of connections drop below MinConnectionsPerNode.
Client
Issue
Here is the drop-idle-connections logic.
aerospike-client-go/connection_heap.go, line 237 at commit b936c14
For example, if we set ClientPolicy.MinConnectionsPerNode to 5, these 5 idle connections may never get a chance to be checked for idleness or timeout. Of course, this happens in a low-traffic Aerospike access scenario.
For these idle connections, if they have already exceeded the server's proto-fd-idle-ms, the Aerospike server may proactively close them without the client realizing it. The client will then get an error like:
Node BB9353C99FXXX 10.y.y.y:3000: write tcp 10.x.x.x:46554->10.y.y.y:3000: write: broken pipe
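For reference, a minimal sketch of the client-side setup this scenario describes; the import path, seed host, and timeout values are placeholders, and the general guidance is to keep IdleTimeout below the server's proto-fd-idle-ms.

```go
package main

import (
	"log"
	"time"

	as "github.com/aerospike/aerospike-client-go" // add the /vN suffix matching your client major version
)

func main() {
	policy := as.NewClientPolicy()
	policy.MinConnectionsPerNode = 5      // floor of warm connections kept per node
	policy.IdleTimeout = 30 * time.Second // illustrative; keep below the server's proto-fd-idle-ms

	client, err := as.NewClientWithPolicy(policy, "10.y.y.y", 3000) // placeholder seed host
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}
```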
Question
Is it possible to fully iterate over the connection pool and clean up the idle connections?