Fix missing wsgi dependency #1578
Conversation
@@ -105,7 +105,7 @@ commands_pre =
  distro: pip install {toxinidir}/opentelemetry-distro {toxinidir}/opentelemetry-distro
  instrumentation: pip install {toxinidir}/opentelemetry-instrumentation

- getting-started: pip install -e {toxinidir}/opentelemetry-instrumentation -e {toxinidir}/opentelemetry-python-contrib/instrumentation/opentelemetry-instrumentation-requests {toxinidir}/opentelemetry-python-contrib/util/opentelemetry-util-http -e {toxinidir}/opentelemetry-python-contrib/instrumentation/opentelemetry-instrumentation-flask
+ getting-started: pip install -e {toxinidir}/opentelemetry-instrumentation -e {toxinidir}/opentelemetry-python-contrib/instrumentation/opentelemetry-instrumentation-requests {toxinidir}/opentelemetry-python-contrib/util/opentelemetry-util-http -e {toxinidir}/opentelemetry-python-contrib/instrumentation/opentelemetry-instrumentation-wsgi -e {toxinidir}/opentelemetry-python-contrib/instrumentation/opentelemetry-instrumentation-flask
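For context on why the wsgi package is needed here: the getting-started target exercises a Flask example that, as far as I can tell, instruments the app through the WSGI layer, so opentelemetry-instrumentation-wsgi has to be installed from the local contrib checkout alongside the Flask instrumentation. A minimal sketch of that kind of example (an illustration, not the exact code from the docs):

# Hypothetical, minimal version of the getting-started Flask example.
# FlaskInstrumentor from opentelemetry-instrumentation-flask builds on the
# WSGI instrumentation, which is why opentelemetry-instrumentation-wsgi
# also has to be available.
import flask

from opentelemetry import trace
from opentelemetry.instrumentation.flask import FlaskInstrumentor
from opentelemetry.sdk.trace import TracerProvider

trace.set_tracer_provider(TracerProvider())

app = flask.Flask(__name__)
FlaskInstrumentor().instrument_app(app)


@app.route("/")
def hello():
    return "hello"


if __name__ == "__main__":
    app.run(port=8082)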
I'm not sure why it isn't failing, but doesn't the tracecontext test also need wsgi as a dependency?
That's a good point... I wonder if it's using the tox cache, which would already have the dependency in it. Maybe that would also explain why this other tox target didn't fail before.
Actually, looking at the tracecontext job, I don't see it requiring the Flask instrumentation. Am I missing something?
It doesn't need the Flask instrumentation. It requires OpenTelemetryMiddleware from opentelemetry-instrumentation-wsgi.
To give more context: OpenTelemetryMiddleware doesn't exist in util-http, so ideally this import should fail:

opentelemetry-python/tests/w3c_tracecontext_validation_server.py, lines 33 to 49 at 731f32c:
from opentelemetry.util.http.wsgi import OpenTelemetryMiddleware

# FIXME This could likely be avoided by integrating this script into the
# standard test running mechanisms.

# Integrations are the glue that binds the OpenTelemetry API and the
# frameworks and libraries that are used together, automatically creating
# Spans and propagating context as appropriate.
trace.set_tracer_provider(TracerProvider())
RequestsInstrumentor().instrument()

# SpanExporter receives the spans and send them to the target location.
span_processor = SimpleExportSpanProcessor(ConsoleSpanExporter())
trace.get_tracer_provider().add_span_processor(span_processor)

app = flask.Flask(__name__)
app.wsgi_app = OpenTelemetryMiddleware(app.wsgi_app)
No idea how it is working; as mentioned earlier, it's probably some cache, but it needs some investigation.
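If I'm reading it right, the script itself would also need its import updated to pull the middleware from the wsgi instrumentation package (outside the scope of this PR). A minimal sketch, assuming the opentelemetry.instrumentation.wsgi import path:

import flask

# Assumed corrected import: OpenTelemetryMiddleware lives in
# opentelemetry-instrumentation-wsgi, not in opentelemetry-util-http.
from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware

app = flask.Flask(__name__)
# Wrap the WSGI callable so incoming requests get server spans and
# propagated trace context.
app.wsgi_app = OpenTelemetryMiddleware(app.wsgi_app)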
I guess this file is not being imported at all when running tox, checking that...
Thanks for catching this @lonewolf3739. Fixed, and I found one more missed reference in the docs.
Yes, that file is not part of any tox-related test.
That file gets run as part of the tracecontext job.
@@ -10,7 +10,7 @@ env:
   # Otherwise, set variable to the commit of your branch on
   # opentelemetry-python-contrib which is compatible with these Core repo
   # changes.
-  CONTRIB_REPO_SHA: f005d90ed3bc75ee6eb7297f9e3a6b55a55b22aa
+  CONTRIB_REPO_SHA: b404de2f393aaaeca73694c37fe58fecf423a707
Where is this commit from? I couldn't find it in the recent main commits.
It's in the contrib PR linked in the description.
Alright, I was thinking maybe this SHA was the reason. Probably not.
Description
A missing dependency in tox was causing the getting-started task to fail; this fixes that.
Does This PR Require a Contrib Repo Change?