Delete performance-metrics cron job #1659
Conversation
Force-pushed from cbffcb9 to f417e8c
🍹 The Update for pulumi/k8s-ci-cluster/5245315801c3ca03ad227b363e2591672aec28b2-1807 was successful.

Resource changes:

| Name | Type | Operation |
|------|------|-----------|
| + gke | pulumi:providers:kubernetes | create |
| + k8s-ci-cluster-5245315801c3ca03ad227b363e2591672aec28b2-1807 | pulumi:pulumi:Stack | create |
| + multicloud | pulumi-kubernetes:ci:GkeCluster | create |
| + password | random:index/randomPassword:RandomPassword | create |
| + ephemeral-ci-cluster | gcp:container/cluster:Cluster | create |
| + primary-node-pool | gcp:container/nodePool:NodePool | create |
🍹 The Update for pulumi/k8s-ci-cluster/02b13cc1c4b74185a1feb145a200a836bd109f6d-1808 was successful.

Resource changes:

| Name | Type | Operation |
|------|------|-----------|
| + k8s-ci-cluster-02b13cc1c4b74185a1feb145a200a836bd109f6d-1808 | pulumi:pulumi:Stack | create |
| + multicloud | pulumi-kubernetes:ci:GkeCluster | create |
| + password | random:index/randomPassword:RandomPassword | create |
| + ephemeral-ci-cluster | gcp:container/cluster:Cluster | create |
| + primary-node-pool | gcp:container/nodePool:NodePool | create |
| + gke | pulumi:providers:kubernetes | create |
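For context, the resource tables above come from the ephemeral GKE cluster that each CI run stands up and tears down. A minimal TypeScript sketch of what such a `pulumi-kubernetes:ci:GkeCluster` component might look like follows; the resource names and the type token are taken from the tables, while all argument values and the kubeconfig wiring are assumptions rather than the actual CI program:

```typescript
import * as gcp from "@pulumi/gcp";
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";
import * as random from "@pulumi/random";

// Sketch of the "pulumi-kubernetes:ci:GkeCluster" component whose children
// appear in the resource tables above. Argument values are illustrative.
export class GkeCluster extends pulumi.ComponentResource {
    public provider: k8s.Provider;

    constructor(name: string, opts?: pulumi.ComponentResourceOptions) {
        super("pulumi-kubernetes:ci:GkeCluster", name, {}, opts);

        // Random password guarding the ephemeral cluster.
        const password = new random.RandomPassword("password", {
            length: 20,
            special: true,
        }, { parent: this });

        // Short-lived GKE cluster stood up for a single CI run.
        const cluster = new gcp.container.Cluster("ephemeral-ci-cluster", {
            initialNodeCount: 1,
            removeDefaultNodePool: true,
        }, { parent: this });

        // Dedicated node pool replacing the default one.
        new gcp.container.NodePool("primary-node-pool", {
            cluster: cluster.name,
            nodeCount: 2,
            nodeConfig: {
                machineType: "n1-standard-2",
                oauthScopes: ["https://www.googleapis.com/auth/cloud-platform"],
            },
        }, { parent: this });

        // Kubernetes provider pointed at the new cluster; real kubeconfig
        // generation is elided here for brevity.
        const kubeconfig = pulumi
            .all([cluster.name, cluster.endpoint, password.result])
            .apply(([clusterName]) => `<kubeconfig for ${clusterName}>`);
        this.provider = new k8s.Provider("gke", { kubeconfig }, { parent: this });

        this.registerOutputs({ provider: this.provider });
    }
}

// Usage, mirroring the "multicloud" instance in the tables:
// const cluster = new GkeCluster("multicloud");
```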
Can we discuss moving this? Our teams use this.
@t0yv0 Sure -- but how do you use it?
@t0yv0 Just noticed your comment here, about setting these metrics up to run in another repo: pulumi/templates#804 (comment). That sounds like the right approach to me. Since this repo (like pulumi/templates) doesn't monitor (and I suppose ultimately has no control over) the performance of the templates per se, having these metrics managed and captured elsewhere probably makes the most sense. That way, you could use any repository (or repositories) you wanted -- this one, pulumi/examples, others, etc.
🍹 The Destroy for pulumi/k8s-ci-cluster/5245315801c3ca03ad227b363e2591672aec28b2-1807 was successful.

Resource changes:

| Name | Type | Operation |
|------|------|-----------|
| - multicloud | pulumi-kubernetes:ci:GkeCluster | delete |
| - password | random:index/randomPassword:RandomPassword | delete |
| - k8s-ci-cluster-5245315801c3ca03ad227b363e2591672aec28b2-1807 | pulumi:pulumi:Stack | delete |
| - gke | pulumi:providers:kubernetes | delete |
| - primary-node-pool | gcp:container/nodePool:NodePool | delete |
| - ephemeral-ci-cluster | gcp:container/cluster:Cluster | delete |
🍹 The Destroy for pulumi/k8s-ci-cluster/02b13cc1c4b74185a1feb145a200a836bd109f6d-1808 was successful.

Resource changes:

| Name | Type | Operation |
|------|------|-----------|
| - password | random:index/randomPassword:RandomPassword | delete |
| - k8s-ci-cluster-02b13cc1c4b74185a1feb145a200a836bd109f6d-1808 | pulumi:pulumi:Stack | delete |
| - gke | pulumi:providers:kubernetes | delete |
| - primary-node-pool | gcp:container/nodePool:NodePool | delete |
| - ephemeral-ci-cluster | gcp:container/cluster:Cluster | delete |
| - multicloud | pulumi-kubernetes:ci:GkeCluster | delete |
This system is documented here: https://github.com/pulumi/home/wiki/CLI-Performance-Metrics. Justin's and @mjeffryes's teams do look at alerts coming out of this system, but it looks like it's not robust at alerting on missing data. I agree the best approach would be to move it out to a repo where ownership is clear, and we'll need to double-check that alerting on broken workflows results in fixes through our ops rotation.
Feel free to go ahead and merge in the meantime; we can reinstate the code through GH history.
As with pulumi/templates#804, this PR deletes a long-failing workflow that no one seems to be using.
Fixes #1627.