Recreate listener if error is occured #236
Conversation
Signed-off-by: clyang82 <[email protected]>
dba9b3d to d21b267
func (f *Default) NewListener(ctx context.Context, channel string, callback func(id string)) *pq.Listener {
	logger := ocmlogger.NewOCMLogger(ctx)

	listener := newListener(ctx, f.config, channel)
How about creating the listener inside the waitForNotification goroutine? Then we maintain the listener in one place, something like:
func waitForNotification() {
	listener := newListener(ctx, f.config, channel)
	for {
		// ...
		// recreate the listener
		listener = newListener(ctx, dbConfig, channel)
	}
}
If we encapsulate the listener inside waitForNotification, we cannot reach the listener from the integration test. Without access to the listener we cannot close it to generate an errListenerClosed and test the listener re-creation.
ok ... maybe add a comment that the returned listener is only for test purposes and must not be shared by other callers?
Signed-off-by: clyang82 <[email protected]>
/retest
Signed-off-by: clyang82 <[email protected]>
	logger.Infof("recreate the listener due to %s", err.Error())
	l.Close()
	// recreate the listener
	l = newListener(ctx, dbConfig, channel)
Will this be a problem? It seems we are changing a shared variable without a lock. What happens if another goroutine is using the listener while it is replaced here?
If we reach this point, it means the listener has a problem and cannot be used anymore.
What happens if a caller uses the listener at the same moment? Will it panic?
The listener is initialized when the maestro server starts:

maestro/cmd/maestro/server/controllers.go, line 61 in e28d043:

go env().Database.SessionFactory.NewListener(ctx, "events", s.KindControllerManager.AddEvent)

So there is no other caller using the listener. We leverage PostgreSQL's LISTEN/NOTIFY mechanism: NOTIFY sends a notification event, together with an optional "payload" string, to every listener that has previously executed LISTEN on that channel.
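For context, here is a minimal, self-contained sketch of the lib/pq LISTEN/NOTIFY pattern; the connection string, reconnect intervals, and channel name are illustrative, not taken from maestro:

package main

import (
	"fmt"
	"time"

	"github.com/lib/pq"
)

func main() {
	// Illustrative connection string; adjust for your environment.
	connStr := "postgres://user:pass@localhost/maestro?sslmode=disable"

	// pq.NewListener maintains its own connection and reconnects automatically
	// within the given interval bounds; connection events are reported via the
	// callback.
	listener := pq.NewListener(connStr, 10*time.Second, time.Minute,
		func(ev pq.ListenerEventType, err error) {
			if err != nil {
				fmt.Println("listener event error:", err)
			}
		})
	defer listener.Close()

	// Execute LISTEN on the "events" channel; a NOTIFY on that channel with an
	// optional payload is delivered on listener.Notify.
	if err := listener.Listen("events"); err != nil {
		panic(err)
	}

	for n := range listener.Notify {
		// pq delivers a nil notification after the connection has been
		// re-established following an outage.
		if n == nil {
			continue
		}
		fmt.Println("received notification with payload:", n.Extra)
	}
}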
…e6d51af797e6d4b (#1087) This includes the following notable changes: purge status events (openshift-online/maestro#232); update sdk to support using two topics for kafka (openshift-online/maestro#238); multiple instances support for grpc broker (openshift-online/maestro#235); Recreate listener if error is occured (openshift-online/maestro#236); fix maestro resource status hash getter (openshift-online/maestro#245). The last two contain fixes needed by ARO-HCP as well.
Recreate the listener when the notification is nil or Ping() returns an error.
Here is the documentation for the case where the notification is nil.
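Below is a sketch of the recovery loop these comments describe; it assumes a newListener helper like the one in the diff above (the package name, the factory closure, and the 90-second ping interval are illustrative):

package db

import (
	"context"
	"time"

	"github.com/lib/pq"
)

// waitForNotification recreates the listener when the notification is nil
// (pq delivers nil after it reconnects) or when a periodic Ping fails.
func waitForNotification(ctx context.Context, newListener func() *pq.Listener, callback func(id string)) {
	l := newListener()
	for {
		select {
		case <-ctx.Done():
			l.Close()
			return
		case n := <-l.Notify:
			if n == nil {
				// The connection was lost and re-established; recreate the
				// listener so the LISTEN subscription is set up again.
				l.Close()
				l = newListener()
				continue
			}
			callback(n.Extra)
		case <-time.After(90 * time.Second):
			// Periodic health check; recreate the listener if the ping fails.
			if err := l.Ping(); err != nil {
				l.Close()
				l = newListener()
			}
		}
	}
}

Passing a factory closure keeps the sketch self-contained; in the actual change the listener is recreated with newListener(ctx, dbConfig, channel), as shown in the hunk above.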