Fix issue with context being cancelled prematurely #70
Conversation
var checkClosers []io.Closer
defer func() {
	for _, c := range checkClosers {
		_ = c.Close()
	}
}()
This was to prevent risk of deadlock. We don't want to be holding `b.mu` when calling `check.Close()`.
@@ -331,12 +337,6 @@ func (b *balancer) initConnInfoLocked(conns []conn.Conn) {
		connection := conns[i]
		connCtx, connCancel := context.WithCancel(b.ctx)
		healthChecker := b.healthChecker.New(connCtx, connection, b)
		go func() {
			defer connCancel()
This was the culprit. This cancellation is not needed. I also moved this block to the end since it doesn't really need to be kicked off here, and I thought it was slightly confusing to break up the rest of the synchronous flow with this in the middle.
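To spell out why the old code was a problem, here is a minimal standalone sketch (the warm-up goroutine and the names in it are illustrative, not the real API): `connCancel` belongs to the connection-scoped context the health checker was built on, so deferring it in a short-lived goroutine cancels that context as soon as the warm-up finishes.

```go
package main

import (
	"context"
	"fmt"
)

func main() {
	// connCtx is meant to live for as long as the connection does; the
	// health checker is created with it and keeps running until it's done.
	connCtx, connCancel := context.WithCancel(context.Background())

	sawCancel := make(chan struct{})

	// Long-lived consumer standing in for the health checker.
	go func() {
		<-connCtx.Done()
		fmt.Println("health checker saw cancellation:", connCtx.Err())
		close(sawCancel)
	}()

	// The buggy pattern: a short-lived warm-up goroutine that cancels the
	// connection-scoped context when it returns, long before the
	// connection is actually closed.
	go func() {
		defer connCancel() // cancels connCtx as soon as warm-up completes
		// ... warm-up work would happen here ...
	}()

	<-sawCancel // the health checker stops after the very first warm-up
}
```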
context.AfterFunc(ctx, func() {
	// Automatically force state to unhealthy after context is cancelled.
	tracker := hc.updateHealthState(connection, health.StateUnhealthy, true)
	if tracker == nil {
		return
	}
	tracker.UpdateHealthState(connection, health.StateUnhealthy)
})
This makes the test health checker context-aware, which reproduced the originally reported issue.
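For reference, a minimal standalone example of `context.AfterFunc` (Go 1.21+), which is what lets the fake checker react to cancellation: the registered callback runs once, in its own goroutine, after the context is done.

```go
package main

import (
	"context"
	"fmt"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	done := make(chan struct{})

	// AfterFunc registers a callback that runs once, in its own goroutine,
	// after ctx is cancelled (or times out).
	stop := context.AfterFunc(ctx, func() {
		fmt.Println("context is done; marking connection unhealthy")
		close(done)
	})
	_ = stop // calling stop() before cancellation would deregister the callback

	cancel()
	<-done
}
```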
if tracker != nil {
	tracker.UpdateHealthState(connection, state)
}
We separate this step from the other updates (in the helper below) so that we are not holding `hc.mu` while calling into the tracker. This was the other side of the changes to eliminate the possible deadlock.
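Here is a minimal sketch of the resulting shape; the types and method names are simplified stand-ins for the test code, not its actual API. All bookkeeping happens under `hc.mu` in the helper, and the call back into the tracker happens only after the lock is released.

```go
package main

import (
	"fmt"
	"sync"
)

// healthTracker stands in for the balancer-side callback.
type healthTracker interface {
	UpdateHealthState(conn, state string)
}

type fakeHealthChecker struct {
	mu       sync.Mutex
	states   map[string]string
	trackers map[string]healthTracker
}

// updateHealthState does all bookkeeping under hc.mu and returns the
// tracker (possibly nil) so the caller can notify it with no lock held.
func (hc *fakeHealthChecker) updateHealthState(conn, state string) healthTracker {
	hc.mu.Lock()
	defer hc.mu.Unlock()
	hc.states[conn] = state
	return hc.trackers[conn]
}

// SetHealthState updates internal state and then, outside hc.mu, calls
// back into the tracker, so the two locks are never held at once.
func (hc *fakeHealthChecker) SetHealthState(conn, state string) {
	tracker := hc.updateHealthState(conn, state)
	if tracker != nil {
		tracker.UpdateHealthState(conn, state)
	}
}

type printTracker struct{}

func (printTracker) UpdateHealthState(conn, state string) {
	fmt.Println(conn, "->", state)
}

func main() {
	hc := &fakeHealthChecker{
		states:   map[string]string{},
		trackers: map[string]healthTracker{"conn-1": printTracker{}},
	}
	hc.SetHealthState("conn-1", "unhealthy")
}
```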
LGTM.
There was a bug where we would cancel the health-checker's context after warming up the connection, which means that health checkers that rely on the context would start permanently failing after the first check.

To reproduce, I updated the `FakeHealthChecker` used in tests to be context-sensitive: it uses `context.AfterFunc` to permanently mark a connection as unhealthy if/when the context is cancelled.

One of my local test runs then hung, revealing a potential deadlock bug. The issue was that the balancer would acquire a lock and then call `check.Close()`. For the `FakeHealthChecker`, closing then acquires the health checker's lock. But over in `FakeHealthChecker.UpdateHealthState`, we acquire locks in the reverse order: acquiring the health checker's lock and then calling `tracker.UpdateHealthState` (which ends up calling into the balancer and acquiring the balancer's lock). Having the lock acquisition order differ exposes a potential deadlock.

I fixed the deadlock by making it so that we don't ever try to acquire both locks at the same time, much less acquire them in different orders:

- We no longer call `check.Close()` while holding a lock.
- In the `FakeHealthChecker`, we no longer call into the tracker while holding a lock.

Resolves #69.
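To make the lock-ordering hazard concrete, here is a deliberately reduced reproduction; it is not the project's code, and the mutex names are only stand-ins for `b.mu` and `hc.mu`. Running it aborts with `fatal error: all goroutines are asleep - deadlock!` because the two goroutines acquire the same pair of locks in opposite orders.

```go
package main

import "sync"

// Two goroutines acquire the same pair of mutexes in opposite orders.
// Once each holds its first lock, neither can take its second, and the
// Go runtime aborts with "fatal error: all goroutines are asleep - deadlock!".
func main() {
	var balancerMu, checkerMu sync.Mutex // stand-ins for b.mu and hc.mu
	checkerHeld := make(chan struct{})
	var wg sync.WaitGroup
	wg.Add(2)

	// Path 1: take the balancer lock, then call into the checker while
	// still holding it (the old "hold b.mu, then check.Close()" path).
	go func() {
		defer wg.Done()
		balancerMu.Lock()
		defer balancerMu.Unlock()
		<-checkerHeld // wait until the other path holds checkerMu
		checkerMu.Lock()
		defer checkerMu.Unlock()
	}()

	// Path 2: take the checker lock, then call back into the balancer
	// (the old "hold hc.mu, then tracker.UpdateHealthState()" path).
	go func() {
		defer wg.Done()
		checkerMu.Lock()
		defer checkerMu.Unlock()
		close(checkerHeld)
		balancerMu.Lock()
		defer balancerMu.Unlock()
	}()

	wg.Wait()
}
```

The fix removes the cycle by never calling across the component boundary while holding either lock, which is exactly what the two bullets above describe.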