diff --git a/README.md b/README.md
index 751e386..4125e05 100644
--- a/README.md
+++ b/README.md
@@ -5,30 +5,35 @@
 that focuses on benchmarking (and testing) the agents, so collection modes for P
 
 Feel free to use it as you wish.
 
-### Usage
+## Usage
 
 Check `make help` for what's possible. Then, if anything is failing, check the `scripts/` bash scripts and adjust them according to your setup. Those are shell scripts, so they will always be flaky for edge cases or races, but it's better than nothing (:
 
 The general flow looks as follows:
 
 * You set up your GKE cluster once: `make cluster-setup CLUSTER_NAME=my-prombenchy`
-* Then to set up any benchmark run you do `make start CLUSTER_NAME=my-prombenchy BENCH_NAME=<name> SCENARIO=./manifests/scenarios/gmp`. This will setup node-pool and your collector (e.g. as daemon set or separate pod - up to you, as long as you do correct node section!)
+* Then to start any benchmark run you do `make start CLUSTER_NAME=my-prombenchy BENCH_NAME=<name> SCENARIO=./manifests/scenarios/gmp`. This will set up the node-pool and your collector (e.g. as a daemon set or a separate pod - up to you, as long as you do the correct node selection!)
 
-You can start as many scenarios as you want on the single cluster (make sure to use unique `BENCH_NAME` though!)
+  You can start as many scenarios as you want on a single cluster (make sure to use a unique `BENCH_NAME` though!). The scenario is a path to the "collector" manifest, so anything that will scrape `./manifests/load/avalanche.exampletarget.yaml`. Feel free to adjust anything in `./manifests/scenarios/` or add your own. You are also welcome to create custom scenarios under `scenarios/`, store them locally or propose them to this repo.
+
+  The `prombenchy` setup uses separate meta-monitoring containers in the `core` namespace:
+  * A separate Prometheus for gathering metrics about core resources and collectors (available locally, but it also sends everything to GCM). Make sure your pod has the `app=collector` label and the relevant port name has the `-ins` suffix, so it gets scraped by this core Prometheus. There is also a dashboard you can apply to GCM in `./dashboards/`.
+  * A Parca profiling agent scraping (30s interval) pods with `app=collector` for Go `pprof` endpoints on the default paths. Currently you need to port-forward 7070 from the pod to access profiles: `kubectl -n core port-forward pod/<pod> 7070`.
 
-The scenario is a path to the "collector" manifest, so anything that will scrape `./manifests/load/avalanche.exampletarget.yaml`. Feel free to adjust anything in `./manifests/scenarios/` or add your own.
+* `make stop CLUSTER_NAME=my-prombenchy BENCH_NAME=<name> SCENARIO=./manifests/scenarios/gmp` kills the node-pool and the experiment.
 
-This setup uses separate Prometheus for gathering metrics about core resources and collectors (available locally and in GCM). Make sure your pod has `app=collector` label and relevant port name has `-ins` suffix, to be scraped by this core Prometheus. There is also a dashboard you can apply to GCM in `./dashboards/`.
+## Bonus CLI
 
-* `make stop CLUSTER_NAME=my-prombenchy BENCH_NAME=<name> SCENARIO=./manifests/scenarios/gmp` kill the node-pool and experiment.
+See [tools/mtypes](./tools/mtypes) to learn about a small CLI for gathering statistics about metric types from a given scrape page. It can also "generate" [avalanche](https://github.com/prometheus-community/avalanche) flags.
 
-### TODOs
+## TODOs
 
 * [ ] All scenarios are GMP aware, so they send data to GCM. In the future, we plan to also benchmark remote-write or OTLP, but proper test receivers would need to be added. Help welcome!
 * [ ] Probably Go code for scripts instead of bash, for reliability.
 * [ ] Cleanup svc account permissions on stopped scenarios.
-* [ ] Make config-reloader work with otel-collector (annoying to delete pod after config changes).
+* [ ] Make config-reloader work with otel-collector and parca (annoying to delete pod after config changes).
+* [ ] Public auth-ed IPs for accessing parca and prometheus details?
 
-### Credits
+## Credits
 
 This repo was started by sharing a lot of design and resources from the https://github.com/prometheus/test-infra repo, which we maintain in the Prometheus team mostly for the [prombench](https://github.com/prometheus/test-infra/tree/master/prombench) functionality. Kudos to the prombench project for the hard work so far!
 
 Since then, it was completely redesigned and simplified.
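To make the labeling contract above concrete, a collector pod needs roughly the following metadata to be picked up by both the core Prometheus and the Parca agent. This is a minimal, hypothetical sketch (the pod name and image are invented for illustration); only the `app: collector` label and the `-ins` port-name suffix are requirements stated in the README:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-collector            # hypothetical name
  labels:
    app: collector              # required: core Prometheus and Parca select pods by this label
spec:
  containers:
    - name: prometheus
      image: prom/prometheus:v2.53.0   # hypothetical image/version
      ports:
        - name: http-ins        # `-ins` suffix required for the core Prometheus to scrape this port
          containerPort: 9090
```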
diff --git a/manifests/core/8b_parca.yaml b/manifests/core/8b_parca.yaml
index 48c33f2..2402cda 100644
--- a/manifests/core/8b_parca.yaml
+++ b/manifests/core/8b_parca.yaml
@@ -28,35 +28,22 @@ data:
       relabel_configs:
       - action: keep
         source_labels: [__meta_kubernetes_pod_label_app]
-        regex: prometheus|managed-prometheus-collector
-      - source_labels: [__meta_kubernetes_pod_label_prometheus]
-        target_label: prometheus
-      - source_labels: [__meta_kubernetes_pod_node_name]
-        action: replace
+        regex: collector
+      - action: keep
+        source_labels: [__meta_kubernetes_pod_container_name]
+        regex: prometheus|otel-collector
+      - action: replace
+        source_labels: [__meta_kubernetes_pod_label_app]
+        target_label: job
+      - action: replace
+        source_labels: [__meta_kubernetes_namespace]
+        target_label: namespace
+      - action: replace
+        source_labels: [__meta_kubernetes_pod_node_name]
         target_label: node_name
-      - source_labels: [__meta_kubernetes_pod_label_benchmark]
-        target_label: benchmark
-      - source_labels: [ __address__ ]
-        target_label: instance
       - action: replace
         source_labels: [__meta_kubernetes_pod_container_name]
         target_label: container
-#      - source_labels: [__profile_path__]
-#        target_label: __init_profile_path
-#      - source_labels: [__meta_kubernetes_service_label_app, __init_profile_path]
-#        regex: prometheus-meta;(.*)
-#        replacement: /prometheus-meta$1
-#        target_label: __profile_path__
-#      - source_labels: [prometheus, pr_number, __init_profile_path]
-#        regex: test-.*;(.*);(.*)
-#        replacement: /$1/prometheus-release$2
-#        target_label: __profile_path__
-#      - source_labels: [prometheus, pr_number, __init_profile_path]
-#        regex: test-pr-.*;(.*);(.*)
-#        replacement: /$1/prometheus-pr$2
-#        target_label: __profile_path__
-#      - regex: __init_profile_path
-#        action: labeldrop
 ---
 apiVersion: apps/v1
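For reference, the new relabel rules above map Kubernetes discovery metadata onto the final Parca target labels roughly as follows; the concrete values are invented for illustration:

```yaml
# Hypothetical discovered pod (values invented for illustration):
#
#   __meta_kubernetes_pod_label_app:      collector        # kept: matches regex `collector`
#   __meta_kubernetes_pod_container_name: prometheus       # kept: matches regex `prometheus|otel-collector`
#   __meta_kubernetes_namespace:          bench-gmp
#   __meta_kubernetes_pod_node_name:      gke-pool-node-1
#
# Labels on the resulting target after the `replace` rules:
#
#   job:       collector        # from __meta_kubernetes_pod_label_app
#   namespace: bench-gmp        # from __meta_kubernetes_namespace
#   node_name: gke-pool-node-1  # from __meta_kubernetes_pod_node_name
#   container: prometheus       # from __meta_kubernetes_pod_container_name
```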