[CLI] Measure AsyncAPI adoption #841
Having in mind that CLI command executions are stateless, short-lived processes, how are we going to keep the concept of a session so we can track consecutive executions of commands (let's call them funnels)? The following questions should be answered sooner or later in order to keep track of those sequences of executions:
I'm more interested in the vision on:
Regarding the session: a UUID would definitely be useful to identify a "single" user. But why do we care about a session? This is about AsyncAPI adoption, right? What do funnels have to do with this? For me this is a different topic. Btw, we also need a way to say "this is a CI/CD run".
Different topic from what, exactly?
You need to correlate events somehow, right? You can use a correlationID when sending those events to whatever place. That's the concept of a session as well.
I'm gonna share what I think about the following:
It is a requirement to send something that acts as a correlation ID with those events, otherwise we won't be able to link them. New Relic, for example, has a Funnels feature: see https://docs.newrelic.com/docs/query-your-data/nrql-new-relic-query-language/nrql-query-tutorials/funnels-evaluate-data-series-related-events/. I think the correlation ID should be replaced every certain period of time, like a session that times out. Otherwise, results won't match reality: events would keep being linked together even with a really big time gap between them, in fact until the correlation ID stored somewhere (/tmp?) is removed and we recreate it.
A UUID is ok. Either v1 or v4.
I guess creating a file in
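As a rough illustration of that idea, here is a minimal sketch, assuming the correlation ID is a UUID stored in a file under the OS temp directory and regenerated after a fixed timeout; the file name, TTL, and helper name are made up for the example and are not what the CLI actually implements:

```typescript
import { existsSync, readFileSync, statSync, writeFileSync } from 'fs';
import { tmpdir } from 'os';
import { join } from 'path';
import { randomUUID } from 'crypto';

// Hypothetical location and timeout, purely illustrative.
const SESSION_FILE = join(tmpdir(), 'asyncapi-cli-session');
const SESSION_TTL_MS = 30 * 60 * 1000; // let the "session" expire after 30 minutes

/**
 * Returns a correlation ID shared by consecutive command executions.
 * The UUID lives in a temp file and is regenerated once the file is
 * older than SESSION_TTL_MS, so funnels stay bounded in time.
 */
export function getSessionId(): string {
  if (existsSync(SESSION_FILE)) {
    const ageMs = Date.now() - statSync(SESSION_FILE).mtimeMs;
    if (ageMs < SESSION_TTL_MS) {
      return readFileSync(SESSION_FILE, 'utf8').trim();
    }
  }
  const id = randomUUID(); // UUID v4
  writeFileSync(SESSION_FILE, id);
  return id;
}
```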
different topic from
this is cool info but:
/progress 50 POC with the first metric is available in #859
Regarding the metrics to collect, I suggest we do a first iteration with at least just one metric that tracks the execution of a few commands, carrying metadata based on the AsyncAPI document in place.
I'm gonna write a list of commands and their metrics that I consider a good starting point. Please note that wherever you see
Feel free to suggest any change. The idea is that we can create a working POC (but not release it), reporting metrics somewhere (ok with New Relic) so we can create a dashboard and validate, after running a few commands locally, whether the metrics make sense.
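For illustration only, here is a possible shape for such a metric event; the field names below are hypothetical and carry the kind of document metadata described above (the real metric names and fields may differ):

```typescript
// Illustrative event shape, not the actual schema used by the CLI.
interface CommandMetric {
  command: string;           // which CLI command ran, e.g. 'validate' or 'convert'
  success: boolean;          // whether the command finished without errors
  // Metadata extracted from the AsyncAPI document in place:
  asyncapiVersion?: string;  // e.g. '2.6.0'
  servers?: number;          // number of servers defined in the document
  channels?: number;         // number of channels defined in the document
  messages?: number;         // number of messages defined in the document
}

// Example of an event as it could be sent to a sink such as New Relic:
const example: CommandMetric = {
  command: 'validate',
  success: true,
  asyncapiVersion: '2.6.0',
  servers: 1,
  channels: 4,
  messages: 6,
};
```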
I like it. This set of initial metrics is a good way to test it 👍 I would also be interested in having metrics on which command is used (for every command). So as soon as it's invoked, we send a metric saying, e.g., "config has been used" (not literally these words, just the concept). Once this is done, we could experiment with "session" metrics to understand things like a single user performing "validate" first and then "generate fromTemplate", for instance. In other words, anonymously identifying the user (machine) performing the action so we can group the actions together and have a funnel/timeline.
You mean it is executed but still not finished, right?
Yeah, I just want to answer the question "What are the most popular commands?". I don't care if it failed or succeeded. Just want to know it was called. So yeah, we can send the metric as soon as the command is invoked. We can probably put it here so it applies to every command: https://github.com/asyncapi/cli/blob/master/src/base.ts.
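A rough sketch of that idea, assuming an oclif-style base class that every command extends and a hypothetical recordInvocationMetric() helper; neither is taken from the actual base.ts:

```typescript
import { Command } from '@oclif/core';

// Hypothetical helper standing in for whatever sends events to the metrics sink.
async function recordInvocationMetric(command: string): Promise<void> {
  // e.g. send { metric: 'command_invoked', command } to New Relic
  console.log(`metric: command_invoked, command=${command}`);
}

// Shared base class: the invocation metric fires as soon as any command
// starts, regardless of whether it later succeeds or fails.
export default abstract class BaseCommand extends Command {
  async init(): Promise<void> {
    await super.init();
    await recordInvocationMetric(this.constructor.name);
  }
}
```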
Taking that into account, should we then omit (at this POC stage) sending the metadata regarding the result of the command and the info related to the command itself (like version converted, number of bundled files, number of optimizations, template used, etc.)?
Yes, this has been mentioned during a 1:1 chat Peter and I had: to add that code into the
I just wanted to confirm we are aligned. My concern is that, if we want to keep both the metric for starting the command and another one for when the command finishes which includes the results (success true/false, converted from/to, etc.), we would need to add a discriminator or rely on the property
I think tracking finished executions is important because of the metadata extracted from the AsyncAPI documents once the docs are parsed (number of servers, channels, etc.).
I definitely want to track the result as well. How we do it doesn't really matter much to me. For invocation we can use one metric and for those containing results we can use another one. It doesn't have to be the same, unless you really want to keep it clean and neat and somehow relate one metric to another.
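One possible way to model that, sketched with a hypothetical action discriminator so the invocation event and the result event can be told apart while staying related (all names below are illustrative):

```typescript
// Illustrative only: a discriminated union covering both kinds of events.
type MetricEvent =
  | { action: 'invoked'; command: string }   // sent as soon as the command starts
  | {
      action: 'executed';                    // sent when the command finishes
      command: string;
      success: boolean;                      // result of the run
      fromVersion?: string;                  // result-specific metadata, e.g. for `convert`
      toVersion?: string;
    };

const invoked: MetricEvent = { action: 'invoked', command: 'convert' };
const executed: MetricEvent = {
  action: 'executed',
  command: 'convert',
  success: true,
  fromVersion: '2.5.0',
  toVersion: '2.6.0',
};
```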
/progress 45 POC is getting more metrics into the CLI. The New Relic sink is ready to be used to test that the metrics collection works and makes sense.
/progress 60 Tested the whole integration with the New Relic sink by executing the CLI locally. After finding several bugs in the implementation, we finally ended up sending metrics to a new temporary New Relic account so we could confirm everything now works as expected.
/progress 65 Now working on the implementation of a new metric to collect the "most popular commands", as mentioned above, regardless of whether the command failed or succeeded.
/progress 80 New metric
/progress 90 The message warning users about metrics collection is already implemented. Now working on unifying the logic so that the metric
Metrics: next actions to be completed:
/progress 95 Metrics are already working correctly in both local and production (New Relic) environments, and a markdown document about metrics collection has been created. Now working on a DX-friendly solution like
/progress 99 New command
/progress 100 POC for measuring adoption already merged. |
Related to asyncapi/community#879
Scope