Internal logging improvements #205
Conversation
The logger was kind of "bolted on" to the client class -- when other parts of the code wanted to use it, they needed to fetch the logger with the same name separately and deal with it as a variable. Move this to a separate `internal_logger` module which is initialised on-demand from the existing initialised (not necessarily started!) global client.
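The on-demand module described above could look something like this. This is a hypothetical sketch, not the actual appsignal-python implementation: the logger name `"appsignal"` and the function names are assumptions for illustration.

```python
import logging

# Assumed shared logger name; the real integration's name may differ.
_LOGGER_NAME = "appsignal"

def logger() -> logging.Logger:
    # logging.getLogger returns the same instance for the same name, so
    # callers no longer need to fetch the logger separately and carry it
    # around as a variable.
    return logging.getLogger(_LOGGER_NAME)

def debug(message: str, *args) -> None:
    # Convenience wrapper so call sites can do internal_logger.debug(...)
    # without touching the underlying Logger object.
    logger().debug(message, *args)
```

Because the module resolves the logger lazily on each call, it works even if the global client configures handlers and levels after the module is imported.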
✔️ All good!
@tombruijn Thanks, that makes sense. Restoring those to "debug". Perhaps my thinking about "trace" and "debug" is wrong? The way I think about it is that "debug" is something that you could enable in production for a not-too-short period of time to figure out an issue. As such I am trying to avoid "spam" of the "Did thing successfully!" variety that would clog the logs -- especially with probes, where it would happen once per minute.
My thoughts about it are that things that are useful for us and users to see in the logs to debug issues are useful as debug logs. That can include "did the thing", if we want to know if the thing was done, like adding instrumentations if we don't see any spans/metrics from them, or if an event (heartbeat, breadcrumb, probe) was sent/triggered or not. Sometimes, not seeing an error about a thing not working is not good enough if we can't tell whether the thing was actually run/started or not.

The trace messages log internal details, things like received OpenTelemetry export requests and requests made to the Push API and public endpoint. That's the debug logging we want to see when we're debugging something deep in the integration or agent, without having to 1) add a new log message to the agent, 2) build a new agent, and 3) add that custom agent build to the app, every time. I also wrote down some short notes about this when I changed some log levels in the agent some time back: https://github.com/appsignal/appsignal-agent/pull/1072

In this example for probes, I think it's useful to log which probe is run as a debug message. That's because we don't log the collection of probes when the probe system starts. If we logged which probes are active at the start, we could log those as a debug message and (if needed) log which probes run every minute as a trace message. (Keep in mind that if a probe gets added after the probe system is started, I would want that new probe also logged as a debug message when it gets added.)
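The debug-versus-trace split suggested for probes could be sketched like this. All names are hypothetical, and the `TRACE = 5` level is an assumption (a common convention for a level below `DEBUG`), not something confirmed by this PR:

```python
import logging

TRACE = 5  # assumed custom level below logging.DEBUG (10)
logging.addLevelName(TRACE, "TRACE")
logger = logging.getLogger("probes-example")

def register_probe(probes: dict, name: str, probe) -> None:
    # Registration is a one-off event (at startup or later), so it is
    # cheap to log at debug and confirms the probe is active.
    probes[name] = probe
    logger.debug("Registered minutely probe: %s", name)

def run_probes(probes: dict) -> None:
    # Running happens once per minute; per-run logging goes to trace to
    # avoid clogging debug-level logs in production.
    for name, probe in probes.items():
        logger.log(TRACE, "Running minutely probe: %s", name)
        probe()
```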
@tombruijn This is very useful, thank you! I'll maybe see about adding the probes logging you describe as well. (Maybe not, it can be done later)
I wouldn't bother with it now. Let's do that in a separate PR?
Make it possible to meaningfully log "trace" logging level messages (meaning they're logged on trace log level, and not logged on debug log level) like in our other integrations.
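Python's standard `logging` module has no built-in trace level, so a level below `DEBUG` has to be registered. A minimal sketch, assuming the numeric value 5 (a common convention, not confirmed from this PR):

```python
import logging

# Register a TRACE level below DEBUG (10), so trace messages are
# emitted at the "trace" level and filtered out at "debug".
TRACE = 5
logging.addLevelName(TRACE, "TRACE")

logger = logging.getLogger("trace-example")

logger.setLevel(logging.DEBUG)
assert not logger.isEnabledFor(TRACE)  # hidden when level is "debug"

logger.setLevel(TRACE)
assert logger.isEnabledFor(TRACE)      # emitted when level is "trace"

logger.log(TRACE, "received OpenTelemetry export request")
```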
When logging level is set to "trace", log all OpenTelemetry spans to the console. This is similar to what the Node.js integration does with the OpenTelemetry diagnostics, which does not seem to have a direct equivalent in Python's OpenTelemetry implementation.
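One way to log spans as they end is a span processor hooked into the tracer provider. The sketch below is hypothetical and deliberately avoids importing OpenTelemetry: a real version would subclass `opentelemetry.sdk.trace.SpanProcessor` and be registered on the `TracerProvider`. It assumes spans expose `to_json()`, as OpenTelemetry's `ReadableSpan` does, and a custom `TRACE` level below `DEBUG`:

```python
import logging

TRACE = 5  # assumed custom level below logging.DEBUG

class TraceSpanLogger:
    """Hypothetical processor that logs every ended span at trace level.

    Sketch only: a real implementation would subclass
    opentelemetry.sdk.trace.SpanProcessor and implement its full
    interface (on_start, shutdown, force_flush as well).
    """

    def __init__(self, logger: logging.Logger) -> None:
        self._logger = logger

    def on_end(self, span) -> None:
        # Serialize the finished span; ReadableSpan.to_json() produces a
        # JSON representation of name, attributes, timestamps, etc.
        self._logger.log(TRACE, "Ended span: %s", span.to_json())
```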
Do not log "Instrumenting ..." when attempting to start an instrumentation (the required dependency for which might not actually be installed). Instead, log "Instrumented ..." when an instrumentation is correctly started.
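The success-only logging could be shaped as below. This is an illustrative helper, not the integration's actual code; the logger name and `start_instrumentation` signature are assumptions:

```python
import logging

logger = logging.getLogger("appsignal")  # assumed internal logger name

def start_instrumentation(name: str, instrument) -> bool:
    """Hypothetical helper: only log once instrumentation succeeds.

    `instrument` is any zero-argument callable that raises (for example
    ImportError) when the library to instrument is not installed.
    """
    try:
        instrument()
    except Exception:
        # The dependency may simply be absent; a premature
        # "Instrumenting ..." message here would be misleading.
        return False
    # Report success only after the instrumentation actually started.
    logger.debug("Instrumented %s", name)
    return True
```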
Found myself needing to do some of this (the OpenTelemetry part, specifically, for the SQLAlchemy instrumentation) and wishing (earlier on, for the heartbeats and probes) that I did not have to initialise a logger instance in order to log something, and that a genuine trace-level logging option was available.
[skip changeset] because none of this really changes anything for customers (who should not be using "trace").

Move internal logger to separate module
The logger was kind of "bolted on" to the client class -- when other
parts of the code wanted to use it, they needed to fetch the logger
with the same name separately and deal with it as a variable.
Move this to a separate `internal_logger` module which is initialised
on-demand from the existing initialised (not necessarily started!)
global client.
Implement trace logging level
Make it possible to meaningfully log "trace" logging level messages
(meaning they're logged on trace log level, and not logged on debug
log level) like in our other integrations.
Log OpenTelemetry spans on trace
When logging level is set to "trace", log all OpenTelemetry spans to
the console.
This is similar to what the Node.js integration does with the
OpenTelemetry diagnostics, which does not seem to have a direct
equivalent in Python's OpenTelemetry implementation.
Log instrumentation only when successful
Do not log "Instrumenting ..." when attempting to start an
instrumentation (the required dependency for which might not actually
be installed).
Instead, log "Instrumented ..." when an instrumentation is correctly
started.