Adds Prometheus metrics for API and client #14783
Conversation
This adds the `prometheus_client` library as a client-side dependency. This is a very lightweight (<500KB compiled) library with no transitive dependencies. I'm choosing Prometheus here as it is the current standard for metrics from the CNCF, and because its Python client library is very light and thus suitable for inclusion on the client side of Prefect.

There are two "sides" that expose a Prometheus metrics endpoint:

* The Prefect server will expose it from startup if `PREFECT_API_ENABLE_METRICS` is on. This endpoint is at `/api/metrics` (to avoid any potential collision with future UI routes).
* The Prefect client SDK will expose these from any of the following points if `PREFECT_CLIENT_ENABLE_METRICS` is on:
  * Serving one or more flows via `serve(...)`
  * Serving one or more tasks via `serve(...)`
  * Entering a flow run context for a flow (e.g. entering the engine for a flow as a Kubernetes job)

The client metrics will be served via `localhost` at the port defined in `PREFECT_CLIENT_METRICS_PORT`.

Note that I haven't included any Prefect-specific metrics here, so we will start with the stock Python-oriented metrics that come out of the box with `prometheus_client`.
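As a rough sketch of the mechanism described above: `prometheus_client` can serve a scrape endpoint on localhost with `start_http_server`, and its default registry already contains the stock Python metrics (e.g. `python_info`) with no extra setup. The port constant and function name below are illustrative, not Prefect's actual implementation.

```python
from prometheus_client import REGISTRY, generate_latest, start_http_server

# Hypothetical default; in Prefect the port comes from the
# PREFECT_CLIENT_METRICS_PORT setting.
METRICS_PORT = 4201


def serve_client_metrics(port: int = METRICS_PORT) -> None:
    """Bind the metrics endpoint to localhost only, mirroring the PR's
    choice to serve client metrics via localhost."""
    start_http_server(port, addr="127.0.0.1")


# Even before any custom metrics are defined, the default registry
# exposes the stock Python-oriented metrics out of the box:
scrape = generate_latest(REGISTRY).decode()
```

A scrape of that endpoint (or of `generate_latest` directly, as above) will include collectors such as `python_info` and the garbage-collection counters, which is exactly the "stock" starting point the PR describes.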
**CodSpeed Performance Report**: Merging #14783 will not alter performance.
good stuff!
Following the addition of `prometheus_client` in #14783, this adds counters for measuring the performance of Prefect event emissions and subscriptions. A note about unit tests: I don't traditionally add unit tests for "leaf-level" instrumentation like this. Leaf-level here means it is measuring something about the system and doesn't form part of a measurement API (as middleware for measuring HTTP latency would, for example). Unless it is particularly complex to calculate, I generally skip extra unit tests and let the standard test suite and coverage tell me whether the instrumentation is a problem or isn't being executed.
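Leaf-level instrumentation of this kind typically amounts to defining a `Counter` and incrementing it alongside the operation being measured. A minimal sketch, with a hypothetical metric name (the actual names in this PR may differ):

```python
from prometheus_client import Counter, REGISTRY, generate_latest

# Assumed metric name for illustration; prometheus_client appends
# "_total" to counter names in the exposition format.
EVENTS_EMITTED = Counter(
    "prefect_events_emitted",
    "Number of events emitted by the Prefect client",
)


def emit_event(event: dict) -> None:
    # Leaf-level instrumentation: increment the counter alongside the
    # real emission logic, which would live here.
    EVENTS_EMITTED.inc()


emit_event({"event": "flow-run.completed"})
scrape = generate_latest(REGISTRY).decode()
```

Because the counter is a side effect of the code path rather than an API of its own, any test that exercises `emit_event` also exercises the instrumentation, which is why coverage from the standard suite is usually enough.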
When can we expect this PR's changes to be included in a release? @jakekaplan @chrisguidry
@akashtyagi08 these changes are in Prefect's 3.0 series, which we released on September 3! Give it a try and let us know how it goes, with either issues or a discussion.