From 0a9e8f7c9e2d696904db32fc5c8c57df3d3aa99b Mon Sep 17 00:00:00 2001
From: Nicolas Takashi
Date: Thu, 5 Dec 2024 09:14:39 +0000
Subject: [PATCH] Merge from Upstream (#1)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* Prepare v0.110.0 release (#3336)

* Ta update configs to enable mtls (#3015)
* Initial commit
* Added Cert Manager CRDs & RBAC validation and management
* Added relevant resources and started adding tests

* Bump github.com/gin-gonic/gin from 1.9.1 to 1.10.0 (#2953)

Bumps [github.com/gin-gonic/gin](https://github.com/gin-gonic/gin) from 1.9.1 to 1.10.0.
- [Release notes](https://github.com/gin-gonic/gin/releases)
- [Changelog](https://github.com/gin-gonic/gin/blob/master/CHANGELOG.md)
- [Commits](https://github.com/gin-gonic/gin/compare/v1.9.1...v1.10.0)

---
updated-dependencies:
- dependency-name: github.com/gin-gonic/gin
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump github.com/prometheus/prometheus in the prometheus group (#2951)

Bumps the prometheus group with 1 update: [github.com/prometheus/prometheus](https://github.com/prometheus/prometheus).

Updates `github.com/prometheus/prometheus` from 0.51.2 to 0.52.0
- [Release notes](https://github.com/prometheus/prometheus/releases)
- [Changelog](https://github.com/prometheus/prometheus/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/prometheus/compare/v0.51.2...v0.52.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/prometheus
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prometheus
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Support for collector readinessProbe (#2944)
* enable readiness Probe for otel operator
Signed-off-by: Janario Oliveira
* generate CRD and controller changes
Signed-off-by: Janario Oliveira
* Adjusted code to be similar to Liveness logic
Signed-off-by: Janario Oliveira
* Generated manifests
Signed-off-by: Janario Oliveira
* Add changelog
Signed-off-by: Janario Oliveira
* Fix lint
Signed-off-by: Janario Oliveira
* Removed readinessProbe from alpha CRD
Signed-off-by: Janario Oliveira
* Generated manifests
Signed-off-by: Janario Oliveira
* Fix lint
Signed-off-by: Janario Oliveira
* Centralized probe validation
Signed-off-by: Janario Oliveira
---------
Signed-off-by: Janario Oliveira
Co-authored-by: hesam.hamdarsi

* Bump github.com/docker/docker (#2954)

Bumps [github.com/docker/docker](https://github.com/docker/docker) from 26.0.1+incompatible to 26.0.2+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v26.0.1...v26.0.2)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: indirect
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Added new Log Encoder Config (#2927)
* Added new Log Encoder Config
Signed-off-by: Yuri Sa
* Added new Log Encoder Config
Signed-off-by: Yuri Sa
* Added new Log Encoder Config
Signed-off-by: Yuri Sa
* Added new Log Encoder Config
Signed-off-by: Yuri Sa
* Added new Log Encoder Config
Signed-off-by: Yuri Sa
* Added new Log Encoder Config
Signed-off-by: Yuri Sa
* Added new Debug doc
Signed-off-by: Yuri Sa
---------
Signed-off-by: Yuri Sa

* [chore] move VineethReddy02 to emeritus (#2957)
Signed-off-by: Juraci Paixão Kröhling

* Cleanup cluster roles and bindings (#2938)
* Fix
Signed-off-by: Pavol Loffay
* Fix
Signed-off-by: Pavol Loffay
* Fix
Signed-off-by: Pavol Loffay
* Fix
Signed-off-by: Pavol Loffay
* Add test
Signed-off-by: Pavol Loffay
---------
Signed-off-by: Pavol Loffay

* Fixed non-expected warnings on TA webhook. (#2962)
Signed-off-by: Yuri Sa

* Verify ServiceMonitor and PodMonitor are installed in prom cr availability check (#2964)
* Verify ServiceMonitor and PodMonitor are installed in prom cr availability check
* Added changelog

* Bump kyverno/action-install-chainsaw from 0.2.0 to 0.2.1 (#2968)

Bumps [kyverno/action-install-chainsaw](https://github.com/kyverno/action-install-chainsaw) from 0.2.0 to 0.2.1.
- [Release notes](https://github.com/kyverno/action-install-chainsaw/releases)
- [Commits](https://github.com/kyverno/action-install-chainsaw/compare/v0.2.0...v0.2.1)

---
updated-dependencies:
- dependency-name: kyverno/action-install-chainsaw
  dependency-type: direct:production
  update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Fix labels for Service Monitors (#2878)
* Create a separate Service Monitor when the Prometheus exporter is present
Signed-off-by: Israel Blancas
* Improve changelog
Signed-off-by: Israel Blancas
* Fix prometheus-cr E2E test
Signed-off-by: Israel Blancas
* Remove unused target
Signed-off-by: Israel Blancas
* Add docstring
Signed-off-by: Israel Blancas
* Fix typo
Signed-off-by: Israel Blancas
* Change the label name
Signed-off-by: Israel Blancas
* Change changelog description
Signed-off-by: Israel Blancas
* Recover removed labels
Signed-off-by: Israel Blancas
* Add missing labels
Signed-off-by: Israel Blancas
* Remove wrong labels
Signed-off-by: Israel Blancas
---------
Signed-off-by: Israel Blancas

* Prepare release 0.100.0 (#2960)
* Prepare release 0.100.0
Signed-off-by: Vineeth Pothulapati
* update the chlog
* update the chlog with #2877 merge
---------
Signed-off-by: Vineeth Pothulapati

* [chore] Refactor allocation strategies (#2928)
* Refactor consistent-hashing strategy
* Refactor per-node strategy
* Refactor least-weighted strategy
* Minor allocation strategy refactor
* Add some common allocation strategy tests
* Fix collector and target reassignment
* Minor allocator fixes
* Add changelog entry
* Fix an incorrect comment

* Bring back webhook port (#2973)
* add back webhook port
* chlog

* patch 0.100.1 (#2974)

* Update the OpenTelemetry Java agent version to 2.4.0 (#2967)

* simplify deletion logic (#2971)

* Update maintainers in the operator hub PR (#2977)
Signed-off-by: Pavol Loffay

* Support for kubernetes 1.30 version (#2975)
* Support for kubernetes 1.30 version
* Update makefile

* [chore] Move TargetAllocator CRD to v1alpha1 (#2918)

* [featuregate] Automatically set GOMEMLIMIT and GOMAXPROCS for collector, target allocator, opamp bridge (#2933)
* set things
* fix kustomize shim
* restore, better chlog

* Fix querying OpenShift user
workload monitoring stack. (#2984)

* Bump alpine from 3.19 to 3.20 (#2990)

Bumps alpine from 3.19 to 3.20.

---
updated-dependencies:
- dependency-name: alpine
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump alpine from 3.19 to 3.20 in /cmd/operator-opamp-bridge (#2991)

Bumps alpine from 3.19 to 3.20.

---
updated-dependencies:
- dependency-name: alpine
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump github.com/go-logr/logr from 1.4.1 to 1.4.2 (#2987)

Bumps [github.com/go-logr/logr](https://github.com/go-logr/logr) from 1.4.1 to 1.4.2.
- [Release notes](https://github.com/go-logr/logr/releases)
- [Changelog](https://github.com/go-logr/logr/blob/master/CHANGELOG.md)
- [Commits](https://github.com/go-logr/logr/compare/v1.4.1...v1.4.2)

---
updated-dependencies:
- dependency-name: github.com/go-logr/logr
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump kyverno/action-install-chainsaw from 0.2.1 to 0.2.2 (#2989)

Bumps [kyverno/action-install-chainsaw](https://github.com/kyverno/action-install-chainsaw) from 0.2.1 to 0.2.2.
- [Release notes](https://github.com/kyverno/action-install-chainsaw/releases)
- [Commits](https://github.com/kyverno/action-install-chainsaw/compare/v0.2.1...v0.2.2)

---
updated-dependencies:
- dependency-name: kyverno/action-install-chainsaw
  dependency-type: direct:production
  update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump the otel group with 5 updates (#2986)

Bumps the otel group with 5 updates:

| Package | From | To |
| --- | --- | --- |
| [go.opentelemetry.io/otel](https://github.com/open-telemetry/opentelemetry-go) | `1.26.0` | `1.27.0` |
| [go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp](https://github.com/open-telemetry/opentelemetry-go) | `1.26.0` | `1.27.0` |
| [go.opentelemetry.io/otel/metric](https://github.com/open-telemetry/opentelemetry-go) | `1.26.0` | `1.27.0` |
| [go.opentelemetry.io/otel/sdk](https://github.com/open-telemetry/opentelemetry-go) | `1.26.0` | `1.27.0` |
| [go.opentelemetry.io/otel/sdk/metric](https://github.com/open-telemetry/opentelemetry-go) | `1.26.0` | `1.27.0` |

Updates `go.opentelemetry.io/otel` from 1.26.0 to 1.27.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.26.0...v1.27.0)

Updates `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp` from 1.26.0 to 1.27.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.26.0...v1.27.0)

Updates `go.opentelemetry.io/otel/metric` from 1.26.0 to 1.27.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.26.0...v1.27.0)

Updates `go.opentelemetry.io/otel/sdk` from 1.26.0 to 1.27.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.26.0...v1.27.0)

Updates `go.opentelemetry.io/otel/sdk/metric` from 1.26.0 to 1.27.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.26.0...v1.27.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/otel
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: otel
- dependency-name: go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: otel
- dependency-name: go.opentelemetry.io/otel/metric
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: otel
- dependency-name: go.opentelemetry.io/otel/sdk
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: otel
- dependency-name: go.opentelemetry.io/otel/sdk/metric
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: otel
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump alpine from 3.19 to 3.20 in /cmd/otel-allocator (#2992)

Bumps alpine from 3.19 to 3.20.

---
updated-dependencies:
- dependency-name: alpine
  dependency-type: direct:production
  update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Keep multiple versions of Collector Config (#2946)

* Prepare v0.101.0 release (#2994)
* Prepare v0.101.0 release
* Undo kustomize stuff
* Undo kustomize stuff again
* Undo kustomize stuff again
* Apply feedback

* Add crd metrics usage information (#2825)
* Add crd metrics usage information
Signed-off-by: Ruben Vargas
* Add mode metric
Signed-off-by: Ruben Vargas
* Refactor CR metrics
Signed-off-by: Ruben Vargas
* Add annotation to avoid generate Metrics
Signed-off-by: Ruben Vargas
* Add unit tests
Signed-off-by: Ruben Vargas
* remove space
Signed-off-by: Ruben Vargas
* remove global provider
Signed-off-by: Ruben Vargas
* Update main.go
Co-authored-by: Israel Blancas
* revert kustomization.yaml
Signed-off-by: Ruben Vargas
* rename some constants
Signed-off-by: Ruben Vargas
* Add connectors metrics
Signed-off-by: Ruben Vargas
* Update chlog
Signed-off-by: Ruben Vargas
* merge new with init, rename some functions, improve changelog entry
Signed-off-by: Ruben Vargas
* improve todo comment
Signed-off-by: Ruben Vargas
* fix tests
Signed-off-by: Ruben Vargas
* set flag to default false
Signed-off-by: Ruben Vargas
* fix lint issues
Signed-off-by: Ruben Vargas
* breaking line
Signed-off-by: Ruben Vargas
* Use api reader to avoid cache issues
Signed-off-by: Ruben Vargas
* Add info metric to changelog entry
Signed-off-by: Ruben Vargas
---------
Signed-off-by: Ruben Vargas
Co-authored-by: Israel Blancas

* Update selector documentation for Target Allocator (#3001)

* Bump github.com/prometheus/prometheus in the prometheus group (#3004)

Bumps the prometheus group with 1 update: [github.com/prometheus/prometheus](https://github.com/prometheus/prometheus).
Updates `github.com/prometheus/prometheus` from 0.52.0 to 0.52.1
- [Release notes](https://github.com/prometheus/prometheus/releases)
- [Changelog](https://github.com/prometheus/prometheus/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/prometheus/compare/v0.52.0...v0.52.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/prometheus
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: prometheus
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump kyverno/action-install-chainsaw from 0.2.2 to 0.2.3 (#3003)

Bumps [kyverno/action-install-chainsaw](https://github.com/kyverno/action-install-chainsaw) from 0.2.2 to 0.2.3.
- [Release notes](https://github.com/kyverno/action-install-chainsaw/releases)
- [Commits](https://github.com/kyverno/action-install-chainsaw/compare/v0.2.2...v0.2.3)

---
updated-dependencies:
- dependency-name: kyverno/action-install-chainsaw
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Introduce simplified parsers (#2972)

* Bump go.opentelemetry.io/otel/exporters/prometheus in the otel group (#3005)

Bumps the otel group with 1 update: [go.opentelemetry.io/otel/exporters/prometheus](https://github.com/open-telemetry/opentelemetry-go).
Updates `go.opentelemetry.io/otel/exporters/prometheus` from 0.48.0 to 0.49.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/example/prometheus/v0.48.0...example/prometheus/v0.49.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/otel/exporters/prometheus
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: otel
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump go.uber.org/zap from 1.26.0 to 1.27.0 (#3006)

Bumps [go.uber.org/zap](https://github.com/uber-go/zap) from 1.26.0 to 1.27.0.
- [Release notes](https://github.com/uber-go/zap/releases)
- [Changelog](https://github.com/uber-go/zap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/uber-go/zap/compare/v1.26.0...v1.27.0)

---
updated-dependencies:
- dependency-name: go.uber.org/zap
  dependency-type: direct:production
  update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Update Kafka version in e2e test (#3009)

* [chore] Bump opentelemetry-autoinstrumentation-python to 0.45b0 (#3000)
* chore: Bump opentelemetry-autoinstrumentation-python to 0.45b0
* [chore] add psycopg==0.45b0

* Fix annotation/label filter setting (#3008)
* fix how options are loaded by removing special casing
* oop
* chlog
* update to specific test
* oop
* Added Cert Manager CRDs & RBAC validation and management
* Added relevant resources and started adding tests
* minor change
* Minor change
* minor change
* Cleanup
* Cleanup, go tidy and resolved conflicts
* Restored local dev changes
* Refactored, removed init container, minor changes
* Use correct files in TLS config
* Added default value to getHttpsListenAddr
* Added flag to enable mTLS between the Target Allocator and the Collector. go mod cleanup
* Using the enable mTLS flag
* Using feature gate in place of command line flags to enable the feature
* Removed flag from manager yaml
* Added featuregate func description
* Initial unit/e2e tests.
some cleanup
* Using TA params
* Cleanup makefile from local changes
* Added step to create cert manager RBAC for e2e mtls tests
* Using Kustomize for patching certmanager permissions
* Cleanup chainsaw test
* Cleanup chainsaw tests
* e2e test case verifying Collector got secret from TA over mTLS
* Added changelog, fixed unit tests
* restored makefile
* Renamed fg import
* Linting rules for imports
* Added more tests, updated the readme
* Added steps in e2e tests for new app
* Ran go mod tidy
* Added new variable to test TA's AddTAConfigToPromConfig
* Setting otel-col-contrib 0.108.0 in e2e test until operator gets updated
* Update pkg/featuregate/featuregate.go
Co-authored-by: Jacob Aronoff
* Added https, serviceMonitor and tls resources assertions to e2e tests
* Using namespaced names for ClusterRoles
* Cleanup
* Added CertManager resources unit tests
* Added unit tests and e2e assertions
* Added missing assertion call
* Update 00-install.yaml
Removed collector image override for e2e test
* Update pkg/featuregate/featuregate.go
Co-authored-by: Mikołaj Świątek
* Minor fixes
* Fixed tests referencing logging exporter
* Moved mTLS file naming consts
* Added missing curly bracket
* Update TA-update-configs-to-enable-mtls.yaml
* Update pkg/featuregate/featuregate.go
Co-authored-by: Mikołaj Świątek
---------
Signed-off-by: dependabot[bot]
Signed-off-by: Janario Oliveira
Signed-off-by: Yuri Sa
Signed-off-by: Juraci Paixão Kröhling
Signed-off-by: Pavol Loffay
Signed-off-by: Israel Blancas
Signed-off-by: Vineeth Pothulapati
Signed-off-by: Ruben Vargas
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Janario Oliveira
Co-authored-by: hesam.hamdarsi
Co-authored-by: Yuri Sa <48062171+yuriolisa@users.noreply.github.com>
Co-authored-by: Juraci Paixão Kröhling
Co-authored-by: Pavol Loffay
Co-authored-by: Aksel Skaar Leirvaag <52233080+akselleirv@users.noreply.github.com>
Co-authored-by: Israel Blancas
Co-authored-by: Vineeth
Pothulapati
Co-authored-by: Mikołaj Świątek
Co-authored-by: Jacob Aronoff
Co-authored-by: OpenTelemetry Bot <107717825+opentelemetrybot@users.noreply.github.com>
Co-authored-by: Vasi Vasireddy <41936996+vasireddy99@users.noreply.github.com>
Co-authored-by: Ishwar Kanse
Co-authored-by: Matt Hagenbuch
Co-authored-by: Tyler Helmuth <12352919+TylerHelmuth@users.noreply.github.com>
Co-authored-by: Ruben Vargas
Co-authored-by: brandonkzw <3462248+brandonkzw@users.noreply.github.com>
Co-authored-by: Mikołaj Świątek

* Become emeritus TA maintainer (#3343)

* fix(collector-webhook): ensure `stabilizationWindowSeconds` validation matches `k8s.io/api/autoscaling/v2` requirements (#3346)
* fix(collector-webhook): ensure `stabilizationWindowSeconds` validation matches `k8s.io/api/autoscaling/v2` requirements
* chore: add changelog for fix

* Add TLS support to auto-instrumentation (#3338)
* Add TLS support to auto-instrumentation
Signed-off-by: Pavol Loffay
* Fix
Signed-off-by: Pavol Loffay
* Fix
Signed-off-by: Pavol Loffay
* More validation
Signed-off-by: Pavol Loffay
---------
Signed-off-by: Pavol Loffay

* Bump the otel group with 6 updates (#3352)

Bumps the otel group with 6 updates:

| Package | From | To |
| --- | --- | --- |
| [go.opentelemetry.io/otel](https://github.com/open-telemetry/opentelemetry-go) | `1.30.0` | `1.31.0` |
| [go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp](https://github.com/open-telemetry/opentelemetry-go) | `1.30.0` | `1.31.0` |
| [go.opentelemetry.io/otel/exporters/prometheus](https://github.com/open-telemetry/opentelemetry-go) | `0.52.0` | `0.53.0` |
| [go.opentelemetry.io/otel/metric](https://github.com/open-telemetry/opentelemetry-go) | `1.30.0` | `1.31.0` |
| [go.opentelemetry.io/otel/sdk](https://github.com/open-telemetry/opentelemetry-go) | `1.30.0` | `1.31.0` |
| [go.opentelemetry.io/otel/sdk/metric](https://github.com/open-telemetry/opentelemetry-go) | `1.30.0` | `1.31.0` |

Updates `go.opentelemetry.io/otel` from 1.30.0 to
1.31.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.30.0...v1.31.0)

Updates `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp` from 1.30.0 to 1.31.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.30.0...v1.31.0)

Updates `go.opentelemetry.io/otel/exporters/prometheus` from 0.52.0 to 0.53.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/example/prometheus/v0.52.0...example/prometheus/v0.53.0)

Updates `go.opentelemetry.io/otel/metric` from 1.30.0 to 1.31.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.30.0...v1.31.0)

Updates `go.opentelemetry.io/otel/sdk` from 1.30.0 to 1.31.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.30.0...v1.31.0)

Updates `go.opentelemetry.io/otel/sdk/metric` from 1.30.0 to 1.31.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.30.0...v1.31.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/otel
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: otel
- dependency-name: go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: otel
- dependency-name: go.opentelemetry.io/otel/exporters/prometheus
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: otel
- dependency-name: go.opentelemetry.io/otel/metric
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: otel
- dependency-name: go.opentelemetry.io/otel/sdk
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: otel
- dependency-name: go.opentelemetry.io/otel/sdk/metric
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: otel
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump github.com/cert-manager/cert-manager from 1.14.5 to 1.16.1 (#3353)

Bumps [github.com/cert-manager/cert-manager](https://github.com/cert-manager/cert-manager) from 1.14.5 to 1.16.1.
- [Release notes](https://github.com/cert-manager/cert-manager/releases)
- [Changelog](https://github.com/cert-manager/cert-manager/blob/master/RELEASE.md)
- [Commits](https://github.com/cert-manager/cert-manager/compare/v1.14.5...v1.16.1)

---
updated-dependencies:
- dependency-name: github.com/cert-manager/cert-manager
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Fix flaky target allocator test (#3355)

The test involves measuring the rate limit for the Prometheus CR watcher, which is difficult to do given the potential timing differences between hosts.
I basically just relaxed a lot of the checks - the test now passes if we take a lot longer than expected, and the tested interval is also larger to begin with.

* add featuregate for k8s 1.28 native sidecar container (#2801)
* add featuregate for k8s 1.28 native sidecar container
Signed-off-by: Benedikt Bongartz
* use slices methods to deal with sidecars
Signed-off-by: Benedikt Bongartz
* apply recommendations
Signed-off-by: Benedikt Bongartz
* test with k8s 1.29
Signed-off-by: Benedikt Bongartz
---------
Signed-off-by: Benedikt Bongartz

* Align instrumentation TLS config with collector (#3358)
Signed-off-by: Pavol Loffay

* Update export-to-cluster-logging-lokistack to detect storageclass (#3357)

* fix upgrade testing (#3364)

* add iblancasa to approvers (#3363)

* upgrade: add test for version 0.110.0 (#3365)
Signed-off-by: Benedikt Bongartz

* Set OTEL_LOGS_EXPORTER for python (#3330)
* Set OTEL_LOGS_EXPORTER for python
* Add changelog
* Update e2e tests
* Fix some e2e
* Fix instrumentation python test
* Another e2e fix
* Apply suggestions from code review
* Update .chloggen/3330-python-otel-logs-exporter.yaml
Co-authored-by: Jacob Aronoff
---------
Co-authored-by: Jacob Aronoff

* Update the OpenTelemetry Java agent version to 2.9.0 (#3366)

* Update to v0.51.0 (#3367)
Co-authored-by: Israel Blancas

* Update chainsaw in makefile (#3347)
Signed-off-by: Pavol Loffay

* v1beta1: apply telemetry config defaults in webhook (#3361)
Signed-off-by: Benedikt Bongartz
Update .chloggen/default_telemetry_settings.yaml
add another webhook test
Signed-off-by: Benedikt Bongartz
avoid using mapstructure
Signed-off-by: Benedikt Bongartz
test: assert on addr

* Bump github.com/prometheus/client_golang from 1.20.4 to 1.20.5 (#3374)

Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.20.4 to 1.20.5.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.20.4...v1.20.5)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* prepare release v0.111.0 (#3351)
* prepare release v0.111.0
Signed-off-by: Benedikt Bongartz
* Apply suggestions from code review
Co-authored-by: Mikołaj Świątek
Signed-off-by: Benedikt Bongartz
---------
Signed-off-by: Benedikt Bongartz
Co-authored-by: Mikołaj Świątek

* Add support for persistentVolumeClaimRetentionPolicy field (#3354)
* Add support for persistentVolumeClaimRetentionPolicy field
* Removed persistentVolumeClaimRetentionPolicy field from v1alpha1
* Added separate persistentVolumeClaimRetentionPolicy e2e test
* Renamed persistentVolumeClaimRetentionPolicy e2e test
* Removed unnecessary PersistentVolumeClaimRetentionPolicy function
* removed persistentVolumeClaimRetentionPolicy e2e test due to version difficulty and low added value
* Update apis/v1beta1/common.go
Co-authored-by: Jacob Aronoff
* Updating api doc with typo fix
---------
Co-authored-by: Jacob Aronoff
Co-authored-by: Israel Blancas

* Add nodejs auto-instrumentation support for linux/s390x,linux/ppc64le (#3362)
Signed-off-by: Pavol Loffay
Co-authored-by: Israel Blancas

* Install all tools using the same macro (#3376)

controller-tools and envtest were installed in a different way than other tools for no good reason. Fix this.
* Add user-specified instrumentation volume (#3285)
* feat(vol): add custom instr volume spec
* feat(vol): generate code
* feat(vol): add unit test
* feat(vol): update api docs
* fix(vol): fix unit test
* feat(vol): move validation to webhook
* feat(vol): add e2e test
* feat(vol): update bundle
* fix(vol): fix bundle
* feat(vol): add validation unit tests
* meta: add changelog
* feat: add ephemeral volume option
* meta: update changelog
* feat: generate
* feat: adjust tests
* feat: regenerate
* fix: fix e2e volume test
* feat: update manifest
* fix: e2e test

* Test operator metrics can be scraped by OpenShift Monitoring (#3377)

* Remove TA maintainers code ownership (#3386)

* target allocator don't run as root (#3385)

* Support configuring java runtime from configmap or secret (env.valueFrom) (#3379)
Signed-off-by: Pavol Loffay

* Bump github.com/prometheus/common from 0.60.0 to 0.60.1 (#3399)

Bumps [github.com/prometheus/common](https://github.com/prometheus/common) from 0.60.0 to 0.60.1.
- [Release notes](https://github.com/prometheus/common/releases)
- [Changelog](https://github.com/prometheus/common/blob/main/RELEASE.md)
- [Commits](https://github.com/prometheus/common/compare/v0.60.0...v0.60.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/common
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump go.opentelemetry.io/collector/featuregate from 1.17.0 to 1.18.0 (#3398)

Bumps [go.opentelemetry.io/collector/featuregate](https://github.com/open-telemetry/opentelemetry-collector) from 1.17.0 to 1.18.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-collector/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-collector/blob/main/CHANGELOG-API.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-collector/compare/pdata/v1.17.0...pdata/v1.18.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/collector/featuregate
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump the kubernetes group with 7 updates (#3397)

Bumps the kubernetes group with 7 updates:

| Package | From | To |
| --- | --- | --- |
| [k8s.io/api](https://github.com/kubernetes/api) | `0.31.1` | `0.31.2` |
| [k8s.io/apiextensions-apiserver](https://github.com/kubernetes/apiextensions-apiserver) | `0.31.1` | `0.31.2` |
| [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery) | `0.31.1` | `0.31.2` |
| [k8s.io/client-go](https://github.com/kubernetes/client-go) | `0.31.1` | `0.31.2` |
| [k8s.io/component-base](https://github.com/kubernetes/component-base) | `0.31.1` | `0.31.2` |
| [k8s.io/kubectl](https://github.com/kubernetes/kubectl) | `0.31.1` | `0.31.2` |
| [sigs.k8s.io/controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) | `0.19.0` | `0.19.1` |

Updates `k8s.io/api` from 0.31.1 to 0.31.2
- [Commits](https://github.com/kubernetes/api/compare/v0.31.1...v0.31.2)

Updates `k8s.io/apiextensions-apiserver` from 0.31.1 to 0.31.2
- [Release notes](https://github.com/kubernetes/apiextensions-apiserver/releases)
- [Commits](https://github.com/kubernetes/apiextensions-apiserver/compare/v0.31.1...v0.31.2)

Updates `k8s.io/apimachinery` from 0.31.1 to 0.31.2
- [Commits](https://github.com/kubernetes/apimachinery/compare/v0.31.1...v0.31.2)

Updates `k8s.io/client-go` from 0.31.1 to 0.31.2
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
[Commits](https://github.com/kubernetes/client-go/compare/v0.31.1...v0.31.2) Updates `k8s.io/component-base` from 0.31.1 to 0.31.2 - [Commits](https://github.com/kubernetes/component-base/compare/v0.31.1...v0.31.2) Updates `k8s.io/kubectl` from 0.31.1 to 0.31.2 - [Commits](https://github.com/kubernetes/kubectl/compare/v0.31.1...v0.31.2) Updates `sigs.k8s.io/controller-runtime` from 0.19.0 to 0.19.1 - [Release notes](https://github.com/kubernetes-sigs/controller-runtime/releases) - [Changelog](https://github.com/kubernetes-sigs/controller-runtime/blob/main/RELEASE.md) - [Commits](https://github.com/kubernetes-sigs/controller-runtime/compare/v0.19.0...v0.19.1) --- updated-dependencies: - dependency-name: k8s.io/api dependency-type: direct:production update-type: version-update:semver-patch dependency-group: kubernetes - dependency-name: k8s.io/apiextensions-apiserver dependency-type: direct:production update-type: version-update:semver-patch dependency-group: kubernetes - dependency-name: k8s.io/apimachinery dependency-type: direct:production update-type: version-update:semver-patch dependency-group: kubernetes - dependency-name: k8s.io/client-go dependency-type: direct:production update-type: version-update:semver-patch dependency-group: kubernetes - dependency-name: k8s.io/component-base dependency-type: direct:production update-type: version-update:semver-patch dependency-group: kubernetes - dependency-name: k8s.io/kubectl dependency-type: direct:production update-type: version-update:semver-patch dependency-group: kubernetes - dependency-name: sigs.k8s.io/controller-runtime dependency-type: direct:production update-type: version-update:semver-patch dependency-group: kubernetes ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * HTTPD instrumentation: Safe modification of httpd.conf (#3395) * HTTPD instrumentation: Safe modification of httpd.conf * HTTPD instrumentation: added changelog file * HTTPD instrumentation: added changelog file, for opened issue #3401 * Bump github.com/prometheus/prometheus in the prometheus group (#3396) Bumps the prometheus group with 1 update: [github.com/prometheus/prometheus](https://github.com/prometheus/prometheus). Updates `github.com/prometheus/prometheus` from 0.54.1 to 0.55.0 - [Release notes](https://github.com/prometheus/prometheus/releases) - [Changelog](https://github.com/prometheus/prometheus/blob/main/CHANGELOG.md) - [Commits](https://github.com/prometheus/prometheus/compare/v0.54.1...v0.55.0) --- updated-dependencies: - dependency-name: github.com/prometheus/prometheus dependency-type: direct:production update-type: version-update:semver-minor dependency-group: prometheus ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Israel Blancas * [autoinstrumentation/nodejs] Update to v0.52.0 (#3400) updates the nodejs auto-instrumentation to the latest versions * autoinstrumentation: install musl based autoinstrumentation in Python Docker image (#3384) * autoinstrumentation: install musl in Python Docker image * Add changelog * Test OpenTelemetry must-gather script (#3387) * Add automatic RBAC creation for kubeletstats receiver (#3388) * Add automatic RBAC creation for kubeletstats receiver Signed-off-by: Israel Blancas * Revert change Signed-off-by: Israel Blancas --------- Signed-off-by: Israel Blancas * Permission check fixed for the serviceaccount of the target allocator (#3391) * Permission check fixed for the serviceaccount of the target allocator * serviceaccount name included in warning message and unit tests are adjusted * Add a separate compatibility document (#3393) * Add Kubernetes support policy (#3406) * docs: outline go support policy (#2839) Signed-off-by: Benedikt Bongartz * Release 0.112.0 (#3405) * Release 0.112.0 Signed-off-by: Yuri Sa * Release 0.112.0 Signed-off-by: Yuri Sa --------- Signed-off-by: Yuri Sa * Generate only TargetAllocator CR from Collector CR (#3402) This is hidden behind a feature flag. Nothing changes by default. 
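The TargetAllocator CR generation above (#3402) is gated behind a feature flag, and operator feature gates are toggled via the manager's `--feature-gates` argument. A hypothetical sketch of enabling it on the operator Deployment — the gate ID shown is an assumption, not confirmed by this log; check the operator's feature-gate documentation for the real name:

```yaml
# Sketch only: enabling an operator feature gate via the manager args.
# The gate ID "operator.collector.targetallocatorcr" is an assumed name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opentelemetry-operator-controller-manager
spec:
  template:
    spec:
      containers:
        - name: manager
          args:
            - --feature-gates=operator.collector.targetallocatorcr
```

By default nothing changes, per the entry above; the gate only opts a cluster into generating the separate TargetAllocator CR.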
* Updating spec mutation logic of daemonset, statefulset, and deployment with mergeWithOverwriteWithEmptyValue (#3324) * e2e tests for additionalContainers added * additionalContainers related unit tests added for mutation * Changed apiversion to v1beta1 in nodeselector e2e test * removed explicit zero value for additionalContainers and changed apply to update in chainsaw test * added affinity in collector e2e tests * affinity unit tests added for daemonset, deployment, statefulset during mutation * collector container mutate args e2e tests added * Unit tests added for additional args mutation of collector container * e2e tests for changing labels in collectors * e2e tests for changing annotations in collectors * fix annotation change e2e test asserts * Error and label change related unit tests added for resource mutation * fix label change e2e tests for mutation * mutating the spec and labels of deployment, daemonset, statefulset with mergeWithOverwriteWithEmptyValue * Adjust reconcile tests to new mutation logic * Added chlog entry for new mutation logic * fix typo in mutate_test.go * Fix G601: Implicit memory aliasing in mutate_test.go * Revert "Adjust reconcile tests to new mutation logic" This reverts commit 9060661d6b82011f77cfa0cebd5a8580bfaa9111. 
* label and annotation changes with mergeWithOverride; adjust tests * copy over desired.spec.template.spec to existing.spec.template.spec * volumeClaimTemplates mutation through range * Change type to bugfix * Fix volume-claim-label e2e test --------- Co-authored-by: Israel Blancas * Inject K8S_NODE_NAME environment variable when using the kubeletstats receiver (#3389) * Add automatic RBAC creation for kubeletstats receiver Signed-off-by: Israel Blancas * Inject K8S_NODE_NAME environment variable when using the kubeletstats receiver Signed-off-by: Israel Blancas * Revert change Signed-off-by: Israel Blancas * Fix lint Signed-off-by: Israel Blancas * Add missing tests Signed-off-by: Israel Blancas * Remove debug statement Signed-off-by: Israel Blancas --------- Signed-off-by: Israel Blancas * Add a benchmark for the whole targets pipeline (#3415) * Python auto-instrumentation: handle musl based containers (#3332) * Python auto-instrumentation: handle musl based containers Build and inject musl based python auto-instrumentation if the proper annotation is configured: instrumentation.opentelemetry.io/otel-python-platform: "musl" Refs #2264 * Add changelog * fix indentation in e2e yaml * Assert specific command in musl e2e instrumentation test * Update README * Use a different struct for scrape target serialization (#3417) * Scrape config and probe support in target allocator (#3394) * Enable scrape config and probe support in TA * chlog * fix the stopping * remove log * downgrade to 1.22, oops * comments * Allow setting target allocator via label (#3411) * Allow setting target allocator via label * Move label definition to constants package * Fix context handling in collector webhook build validator * [Autoinstrumentation Nodejs] Support exporting traces via http using `OTEL_EXPORTER_OTLP_PROTOCOL` (#3413) * refactor(exporter): extract function to create traces exporter Issue #3412 * feat(exporter): Support OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf for exporting 
traces via http Closes #3412 * [chore] Don't check changelog links in CI (#3424) * bump otel dotnet autoinstrumentation to 1.9.0 (#3434) * Add automatic RBAC creation for k8sevents receiver (#3421) * Add automatic RBAC creation for k8sevents receiver Signed-off-by: Israel Blancas * Add missing file Signed-off-by: Israel Blancas --------- Signed-off-by: Israel Blancas * Create ServiceMonitor for operator metrics programmatically (#3371) * Create ServiceMonitor for operator metrics programmatically Signed-off-by: Israel Blancas * Apply changes requested in CR Signed-off-by: Israel Blancas * Apply changes requested in CR Signed-off-by: Israel Blancas --------- Signed-off-by: Israel Blancas * Prepare release 0.113.0 (#3437) * Prepare release 0.113.0 Signed-off-by: Pavol Loffay * Fix Signed-off-by: Pavol Loffay --------- Signed-off-by: Pavol Loffay * Stop unnecessarily caching in the CI lint workflow (#3440) * Bump go.opentelemetry.io/collector/featuregate from 1.18.0 to 1.19.0 (#3444) Bumps [go.opentelemetry.io/collector/featuregate](https://github.com/open-telemetry/opentelemetry-collector) from 1.18.0 to 1.19.0. - [Release notes](https://github.com/open-telemetry/opentelemetry-collector/releases) - [Changelog](https://github.com/open-telemetry/opentelemetry-collector/blob/main/CHANGELOG-API.md) - [Commits](https://github.com/open-telemetry/opentelemetry-collector/compare/pdata/v1.18.0...pdata/v1.19.0) --- updated-dependencies: - dependency-name: go.opentelemetry.io/collector/featuregate dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * Bump github.com/prometheus/prometheus in the prometheus group (#3442) Bumps the prometheus group with 1 update: [github.com/prometheus/prometheus](https://github.com/prometheus/prometheus). 
Updates `github.com/prometheus/prometheus` from 0.55.0 to 0.55.1 - [Release notes](https://github.com/prometheus/prometheus/releases) - [Changelog](https://github.com/prometheus/prometheus/blob/main/CHANGELOG.md) - [Commits](https://github.com/prometheus/prometheus/compare/v0.55.0...v0.55.1) --- updated-dependencies: - dependency-name: github.com/prometheus/prometheus dependency-type: direct:production update-type: version-update:semver-patch dependency-group: prometheus ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * Bump the otel group with 6 updates (#3443) Bumps the otel group with 6 updates: | Package | From | To | | --- | --- | --- | | [go.opentelemetry.io/otel](https://github.com/open-telemetry/opentelemetry-go) | `1.31.0` | `1.32.0` | | [go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp](https://github.com/open-telemetry/opentelemetry-go) | `1.31.0` | `1.32.0` | | [go.opentelemetry.io/otel/exporters/prometheus](https://github.com/open-telemetry/opentelemetry-go) | `0.53.0` | `0.54.0` | | [go.opentelemetry.io/otel/metric](https://github.com/open-telemetry/opentelemetry-go) | `1.31.0` | `1.32.0` | | [go.opentelemetry.io/otel/sdk](https://github.com/open-telemetry/opentelemetry-go) | `1.31.0` | `1.32.0` | | [go.opentelemetry.io/otel/sdk/metric](https://github.com/open-telemetry/opentelemetry-go) | `1.31.0` | `1.32.0` | Updates `go.opentelemetry.io/otel` from 1.31.0 to 1.32.0 - [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases) - [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md) - [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.31.0...v1.32.0) Updates `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp` from 1.31.0 to 1.32.0 - [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases) - 
[Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md) - [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.31.0...v1.32.0) Updates `go.opentelemetry.io/otel/exporters/prometheus` from 0.53.0 to 0.54.0 - [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases) - [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md) - [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/example/prometheus/v0.53.0...exporters/prometheus/v0.54.0) Updates `go.opentelemetry.io/otel/metric` from 1.31.0 to 1.32.0 - [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases) - [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md) - [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.31.0...v1.32.0) Updates `go.opentelemetry.io/otel/sdk` from 1.31.0 to 1.32.0 - [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases) - [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md) - [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.31.0...v1.32.0) Updates `go.opentelemetry.io/otel/sdk/metric` from 1.31.0 to 1.32.0 - [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases) - [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md) - [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.31.0...v1.32.0) --- updated-dependencies: - dependency-name: go.opentelemetry.io/otel dependency-type: direct:production update-type: version-update:semver-minor dependency-group: otel - dependency-name: go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp dependency-type: direct:production update-type: version-update:semver-minor dependency-group: otel - dependency-name: go.opentelemetry.io/otel/exporters/prometheus dependency-type: direct:production update-type: 
version-update:semver-minor dependency-group: otel - dependency-name: go.opentelemetry.io/otel/metric dependency-type: direct:production update-type: version-update:semver-minor dependency-group: otel - dependency-name: go.opentelemetry.io/otel/sdk dependency-type: direct:production update-type: version-update:semver-minor dependency-group: otel - dependency-name: go.opentelemetry.io/otel/sdk/metric dependency-type: direct:production update-type: version-update:semver-minor dependency-group: otel ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * Fix invalid manifests in e2e tests (#3449) * Store Prometheus labels as a slice instead of a map (#3422) * Drop half of the targets in relabel benchmark * Store Prometheus labels as a slice instead of a map This should be more efficient in general, and will let us skip target hashing completely in the future. * Refactor the relabel loop * Fix error when the operator metrics ServiceMonitor already exists (#3447) * Fix error when the operator metrics ServiceMonitor already exists Signed-off-by: Israel Blancas * Wrap the errors and log them Signed-off-by: Israel Blancas --------- Signed-off-by: Israel Blancas * Update the OpenTelemetry Java agent version to 2.10.0 (#3457) * Check operator metrics (#3458) * Add automatic RBAC creation for k8sobjects receiver (#3430) * Add automatic RBAC creation for k8sobjects receiver Signed-off-by: Israel Blancas * Fix documentation Signed-off-by: Israel Blancas * Move to v1beta1 Signed-off-by: Israel Blancas --------- Signed-off-by: Israel Blancas * Bump github.com/cert-manager/cert-manager from 1.16.1 to 1.16.2 (#3480) Bumps [github.com/cert-manager/cert-manager](https://github.com/cert-manager/cert-manager) from 1.16.1 to 1.16.2. 
- [Release notes](https://github.com/cert-manager/cert-manager/releases) - [Changelog](https://github.com/cert-manager/cert-manager/blob/master/RELEASE.md) - [Commits](https://github.com/cert-manager/cert-manager/compare/v1.16.1...v1.16.2) --- updated-dependencies: - dependency-name: github.com/cert-manager/cert-manager dependency-type: direct:production ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * Fix assert for multi-cluster test (#3481) * Bump base memory requirements for python and go (#3473) * Bump base memory requirements for python and go - When auto-instrumenting applications, I have noticed that the default memory limits are too tight for some languages. This leads to the following: - Intermittent OOMKilled events in the init container when auto-instrumenting python applications. Eventually the pods are able to start. - OOMKilled events for sidecar containers in go applications. The pods are not able to start. - 64Mi seems to be enough to fix these issues. While some tweaking by users may still be necessary, the operator should work out-of-the-box for all supported languages. 
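The memory bump above raises the operator's built-in defaults; users who still need tuning can override resources per language on the Instrumentation CR. A hedged sketch, assuming the CRD exposes a standard Kubernetes `resources` block under each language (field shape is an assumption, not stated in this log):

```yaml
# Sketch: overriding injected-container memory on the Instrumentation CR.
# The per-language `resources` field is an assumed CRD shape; 64Mi mirrors
# the new default mentioned in #3473.
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: python-instrumentation
spec:
  python:
    resources:
      requests:
        memory: 64Mi
      limits:
        memory: 64Mi
```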
* Add changelog * Link issue in changelog * chore: replace gcr.io for kube-rbac-proxy image (#3485) * chore: replace gcr.io for kube-rbac-proxy image * chore: add Changelog * Test operator restart (#3486) * Cleanup nodejs dependencies (#3466) Signed-off-by: Pavol Loffay * Add allocation_fallback_strategy option as fallback strategy for per-node strategy (#3482) * Add least-weighted as fallback strategy for per-node strategy * Add changelog file * Change fallback strategy to consistent-hashing * Update changelog * Fix bad test condition that might pass even if target was not assigned * Make fallback strategy a config option * Update changelog * Add period to test comments * Add feature gate for enabling fallback strategy * Fix featuregate id * Update cmd/otel-allocator/allocation/per_node_test.go Co-authored-by: Mikołaj Świątek * Update cmd/otel-allocator/allocation/per_node_test.go Co-authored-by: Mikołaj Świątek * Update cmd/otel-allocator/allocation/per_node_test.go Co-authored-by: Mikołaj Świątek * Only add fallbackstrategy if nonempty * Remove unnecessary comments * Add unit test for fallbackstrategy feature gate * Update changelog --------- Co-authored-by: Mikołaj Świątek * Add automatic RBAC creation for k8scluster receiver (#3428) Signed-off-by: Israel Blancas * Add a warning message when one created collector needs extra RBAC permissions and the service account doesn't have them (#3433) * Add a warning message when one created collector needs extra RBAC permissions and the service account doesn't have them Signed-off-by: Israel Blancas * Fix nil Signed-off-by: Israel Blancas * Show an admission warning Signed-off-by: Israel Blancas * Apply changes requested in code review Signed-off-by: Israel Blancas --------- Signed-off-by: Israel Blancas * Bump go.opentelemetry.io/collector/featuregate from 1.19.0 to 1.20.0 (#3493) Bumps [go.opentelemetry.io/collector/featuregate](https://github.com/open-telemetry/opentelemetry-collector) from 1.19.0 to 1.20.0. 
- [Release notes](https://github.com/open-telemetry/opentelemetry-collector/releases) - [Changelog](https://github.com/open-telemetry/opentelemetry-collector/blob/main/CHANGELOG-API.md) - [Commits](https://github.com/open-telemetry/opentelemetry-collector/compare/pdata/v1.19.0...pdata/v1.20.0) --- updated-dependencies: - dependency-name: go.opentelemetry.io/collector/featuregate dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * Bump github.com/Masterminds/semver/v3 from 3.3.0 to 3.3.1 (#3492) Bumps [github.com/Masterminds/semver/v3](https://github.com/Masterminds/semver) from 3.3.0 to 3.3.1. - [Release notes](https://github.com/Masterminds/semver/releases) - [Changelog](https://github.com/Masterminds/semver/blob/master/CHANGELOG.md) - [Commits](https://github.com/Masterminds/semver/compare/v3.3.0...v3.3.1) --- updated-dependencies: - dependency-name: github.com/Masterminds/semver/v3 dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * Bump the kubernetes group with 7 updates (#3491) Bumps the kubernetes group with 7 updates: | Package | From | To | | --- | --- | --- | | [k8s.io/api](https://github.com/kubernetes/api) | `0.31.2` | `0.31.3` | | [k8s.io/apiextensions-apiserver](https://github.com/kubernetes/apiextensions-apiserver) | `0.31.2` | `0.31.3` | | [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery) | `0.31.2` | `0.31.3` | | [k8s.io/client-go](https://github.com/kubernetes/client-go) | `0.31.2` | `0.31.3` | | [k8s.io/component-base](https://github.com/kubernetes/component-base) | `0.31.2` | `0.31.3` | | [k8s.io/kubectl](https://github.com/kubernetes/kubectl) | `0.31.2` | `0.31.3` | | [sigs.k8s.io/controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) | `0.19.1` | `0.19.2` | Updates `k8s.io/api` from 0.31.2 to 0.31.3 - [Commits](https://github.com/kubernetes/api/compare/v0.31.2...v0.31.3) Updates `k8s.io/apiextensions-apiserver` from 0.31.2 to 0.31.3 - [Release notes](https://github.com/kubernetes/apiextensions-apiserver/releases) - [Commits](https://github.com/kubernetes/apiextensions-apiserver/compare/v0.31.2...v0.31.3) Updates `k8s.io/apimachinery` from 0.31.2 to 0.31.3 - [Commits](https://github.com/kubernetes/apimachinery/compare/v0.31.2...v0.31.3) Updates `k8s.io/client-go` from 0.31.2 to 0.31.3 - [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md) - [Commits](https://github.com/kubernetes/client-go/compare/v0.31.2...v0.31.3) Updates `k8s.io/component-base` from 0.31.2 to 0.31.3 - [Commits](https://github.com/kubernetes/component-base/compare/v0.31.2...v0.31.3) Updates `k8s.io/kubectl` from 0.31.2 to 0.31.3 - [Commits](https://github.com/kubernetes/kubectl/compare/v0.31.2...v0.31.3) Updates `sigs.k8s.io/controller-runtime` from 0.19.1 to 0.19.2 - [Release 
notes](https://github.com/kubernetes-sigs/controller-runtime/releases) - [Changelog](https://github.com/kubernetes-sigs/controller-runtime/blob/main/RELEASE.md) - [Commits](https://github.com/kubernetes-sigs/controller-runtime/compare/v0.19.1...v0.19.2) --- updated-dependencies: - dependency-name: k8s.io/api dependency-type: direct:production update-type: version-update:semver-patch dependency-group: kubernetes - dependency-name: k8s.io/apiextensions-apiserver dependency-type: direct:production update-type: version-update:semver-patch dependency-group: kubernetes - dependency-name: k8s.io/apimachinery dependency-type: direct:production update-type: version-update:semver-patch dependency-group: kubernetes - dependency-name: k8s.io/client-go dependency-type: direct:production update-type: version-update:semver-patch dependency-group: kubernetes - dependency-name: k8s.io/component-base dependency-type: direct:production update-type: version-update:semver-patch dependency-group: kubernetes - dependency-name: k8s.io/kubectl dependency-type: direct:production update-type: version-update:semver-patch dependency-group: kubernetes - dependency-name: sigs.k8s.io/controller-runtime dependency-type: direct:production update-type: version-update:semver-patch dependency-group: kubernetes ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * [autoinstrumentation/nodejs] update node dependencies (#3475) This updates the dependencies of the autoinstrumentation for nodejs to the latest available versions. There have been some important bugfixes for us recently. Signed-off-by: Marius Svechla * [chore] Fix featuregate usage in controller tests (#3490) We haven't been unsetting feature gates in controller tests after ending the test, leading to them being enabled for the duration of the test suite. In one case, a test actually depended on this fact, and I needed to set the gate in it explicitly. 
Also switched to use the gates explicitly vs parsing flags. * Release v0.114.0 (#3498) * Create service for extensions (#3403) * feat: create service for extensions Signed-off-by: Ankit152 * chore: added extension service in manifest factories Signed-off-by: Ankit152 * chore: added unit test for extension service function Signed-off-by: Ankit152 * chore: added e2e tests for extensions Signed-off-by: Ankit152 --------- Signed-off-by: Ankit152 * Fix prometheus rule file (#3504) * Fix PrometheusRule file Signed-off-by: Yuri Sa * Fix PrometheusRule file Signed-off-by: Yuri Sa * Fix PrometheusRule file Signed-off-by: Yuri Sa --------- Signed-off-by: Yuri Sa * Bump github.com/stretchr/testify from 1.9.0 to 1.10.0 (#3506) Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.9.0 to 1.10.0. - [Release notes](https://github.com/stretchr/testify/releases) - [Commits](https://github.com/stretchr/testify/compare/v1.9.0...v1.10.0) --- updated-dependencies: - dependency-name: github.com/stretchr/testify dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * Revert "Support configuring java runtime from configmap or secret (env.valueFrom)" (#3510) * Revert "Support configuring java runtime from configmap or secret (env.valueF…" This reverts commit 2b36f0d6f9498e3c82185a4a18f0c855c4da4a57. 
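The reverted change (#3379/#3510) exposed the standard Kubernetes `env.valueFrom` mechanism for the Java runtime configuration. For context, this is what `env.valueFrom` looks like in a plain pod spec — the ConfigMap and key names here are illustrative only, not from the reverted feature:

```yaml
# Standard Kubernetes env.valueFrom: sourcing an env var from a ConfigMap key.
# Names below are hypothetical examples.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: app
      image: example/app:latest
      env:
        - name: JAVA_TOOL_OPTIONS
          valueFrom:
            configMapKeyRef:
              name: java-runtime-config
              key: JAVA_TOOL_OPTIONS
```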
* chlog (#3511) --------- Signed-off-by: dependabot[bot] Signed-off-by: Janario Oliveira Signed-off-by: Yuri Sa Signed-off-by: Juraci Paixão Kröhling Signed-off-by: Pavol Loffay Signed-off-by: Israel Blancas Signed-off-by: Vineeth Pothulapati Signed-off-by: Ruben Vargas Signed-off-by: Benedikt Bongartz Signed-off-by: Israel Blancas Signed-off-by: Marius Svechla Signed-off-by: Ankit152 Co-authored-by: Mikołaj Świątek Co-authored-by: ItielOlenick <67790309+ItielOlenick@users.noreply.github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Janario Oliveira Co-authored-by: hesam.hamdarsi Co-authored-by: Yuri Sa <48062171+yuriolisa@users.noreply.github.com> Co-authored-by: Juraci Paixão Kröhling Co-authored-by: Pavol Loffay Co-authored-by: Aksel Skaar Leirvaag <52233080+akselleirv@users.noreply.github.com> Co-authored-by: Israel Blancas Co-authored-by: Vineeth Pothulapati Co-authored-by: Jacob Aronoff Co-authored-by: OpenTelemetry Bot <107717825+opentelemetrybot@users.noreply.github.com> Co-authored-by: Vasi Vasireddy <41936996+vasireddy99@users.noreply.github.com> Co-authored-by: Ishwar Kanse Co-authored-by: Matt Hagenbuch Co-authored-by: Tyler Helmuth <12352919+TylerHelmuth@users.noreply.github.com> Co-authored-by: Ruben Vargas Co-authored-by: brandonkzw <3462248+brandonkzw@users.noreply.github.com> Co-authored-by: Mikołaj Świątek Co-authored-by: Kristina Pathak Co-authored-by: Brian Fox <878612+onematchfox@users.noreply.github.com> Co-authored-by: Ben B. Co-authored-by: Riccardo Magliocchetti Co-authored-by: Jared Tan Co-authored-by: Israel Blancas Co-authored-by: David Haja Co-authored-by: Joshua Narezo <21177895+jnarezo@users.noreply.github.com> Co-authored-by: Costis C. 
Co-authored-by: msvechla Co-authored-by: Ats Uiboupin Co-authored-by: Mateusz Łach Co-authored-by: Jorge Creixell Co-authored-by: Koshevy Anton (astlock) <988910+astlock@users.noreply.github.com> Co-authored-by: lhp-nemlig <159530308+lhp-nemlig@users.noreply.github.com> Co-authored-by: Ankit Kurmi --- ...nable-multiinstrumentation-by-default.yaml | 30 - .chloggen/3149-add-must-gather.yaml | 25 - .chloggen/add_all_receiver_defaults.yaml | 18 - .chloggen/fips.yaml | 19 - ...ing.yaml => fix-prometheus-rule-file.yaml} | 8 +- .chloggen/remove_localhost_fg.yaml | 36 - .../resource-attribute-from-annotations.yaml | 24 - ...s.yaml => revert-3379-otel-configmap.yaml} | 9 +- ...r_defaults.yaml => service-extension.yaml} | 6 +- .github/CODEOWNERS | 3 - .github/workflows/changelog.yaml | 13 - .github/workflows/continuous-integration.yaml | 2 - .github/workflows/e2e.yaml | 13 +- .../publish-autoinstrumentation-nodejs.yaml | 4 +- .../reusable-operator-hub-release.yaml | 4 +- .gitignore | 4 +- .linkspector.yml | 0 CHANGELOG.md | 283 + Makefile | 49 +- README.md | 70 +- RELEASE.md | 15 +- apis/v1alpha1/instrumentation_types.go | 57 + apis/v1alpha1/instrumentation_webhook.go | 68 +- apis/v1alpha1/instrumentation_webhook_test.go | 109 + apis/v1alpha1/targetallocator_webhook.go | 7 +- apis/v1alpha1/targetallocator_webhook_test.go | 28 +- apis/v1alpha1/zz_generated.deepcopy.go | 29 +- apis/v1beta1/collector_webhook.go | 26 +- apis/v1beta1/collector_webhook_test.go | 106 +- apis/v1beta1/common.go | 6 + apis/v1beta1/config.go | 107 +- apis/v1beta1/config_test.go | 83 +- apis/v1beta1/targetallocator_rbac.go | 2 +- apis/v1beta1/zz_generated.deepcopy.go | 6 + autoinstrumentation/dotnet/version.txt | 2 +- autoinstrumentation/java/version.txt | 2 +- autoinstrumentation/nodejs/package.json | 19 +- .../nodejs/src/autoinstrumentation.ts | 22 +- autoinstrumentation/python/Dockerfile | 19 +- ...emetry-operator.clusterserviceversion.yaml | 14 +- .../opentelemetry.io_instrumentations.yaml | 797 +++ 
...ntelemetry.io_opentelemetrycollectors.yaml | 7 + ...er-manager-metrics-service_v1_service.yaml | 2 + ...nitoring.coreos.com_v1_prometheusrule.yaml | 24 + ...eus_rbac.authorization.k8s.io_v1_role.yaml | 15 + ...c.authorization.k8s.io_v1_rolebinding.yaml | 12 + ...emetry-operator.clusterserviceversion.yaml | 27 +- .../opentelemetry.io_instrumentations.yaml | 797 +++ ...ntelemetry.io_opentelemetrycollectors.yaml | 7 + cmd/otel-allocator/Dockerfile | 6 +- cmd/otel-allocator/README.md | 37 +- cmd/otel-allocator/allocation/allocator.go | 5 + .../allocation/allocator_test.go | 10 +- .../allocation/consistent_hashing.go | 5 +- .../allocation/least_weighted.go | 2 + cmd/otel-allocator/allocation/per_node.go | 18 +- .../allocation/per_node_test.go | 109 +- cmd/otel-allocator/allocation/strategy.go | 43 +- cmd/otel-allocator/allocation/testutils.go | 16 +- cmd/otel-allocator/benchmark_test.go | 192 + cmd/otel-allocator/config/config.go | 57 +- cmd/otel-allocator/config/config_test.go | 1 + cmd/otel-allocator/config/flags.go | 45 +- cmd/otel-allocator/config/flags_test.go | 10 +- .../config/testdata/config_test.yaml | 1 + cmd/otel-allocator/main.go | 8 +- cmd/otel-allocator/prehook/relabel.go | 29 +- cmd/otel-allocator/prehook/relabel_test.go | 9 +- cmd/otel-allocator/server/bench_test.go | 25 +- cmd/otel-allocator/server/mocks_test.go | 1 + cmd/otel-allocator/server/server.go | 50 +- cmd/otel-allocator/server/server_test.go | 65 +- cmd/otel-allocator/target/discovery.go | 40 +- cmd/otel-allocator/target/discovery_test.go | 2 +- cmd/otel-allocator/target/target.go | 44 +- cmd/otel-allocator/watcher/promOperator.go | 107 +- .../watcher/promOperator_test.go | 180 +- .../opentelemetry.io_instrumentations.yaml | 797 +++ ...ntelemetry.io_opentelemetrycollectors.yaml | 7 + config/default/kustomization.yaml | 2 - config/default/manager_auth_proxy_patch.yaml | 2 +- config/manager/kustomization.yaml | 1 + config/overlays/openshift/kustomization.yaml | 4 + 
config/overlays/openshift/manager-patch.yaml | 2 +- .../manager_auth_proxy_tls_patch.yaml | 29 + .../openshift/metrics_service_tls_patch.yaml | 7 + config/prometheus/kustomization.yaml | 2 - config/prometheus/monitor.yaml | 26 - config/rbac/role.yaml | 4 + controllers/builder_test.go | 2666 ++++++++- controllers/common.go | 37 +- .../opentelemetrycollector_controller.go | 27 +- controllers/reconcile_test.go | 98 +- controllers/suite_test.go | 15 +- controllers/targetallocator_controller.go | 147 +- .../targetallocator_reconciler_test.go | 179 + docs/api.md | 5187 ++++++++++++++--- docs/compatibility.md | 76 + go.mod | 122 +- go.sum | 254 +- internal/autodetect/autodetectutils/utils.go | 47 + internal/autodetect/certmanager/check.go | 55 + internal/autodetect/certmanager/operator.go | 30 + internal/autodetect/main.go | 32 + internal/autodetect/main_test.go | 97 +- internal/autodetect/rbac/check.go | 33 +- internal/components/builder.go | 8 +- internal/components/component.go | 7 + internal/components/extensions/helpers.go | 3 + internal/components/generic_parser.go | 12 + internal/components/multi_endpoint.go | 4 + internal/components/receivers/helpers.go | 14 +- internal/components/receivers/k8scluster.go | 87 + .../components/receivers/k8scluster_test.go | 164 + internal/components/receivers/k8sevents.go | 79 + internal/components/receivers/k8sobjects.go | 49 + .../components/receivers/k8sobjects_test.go | 136 + internal/components/receivers/kubeletstats.go | 95 + .../components/receivers/kubeletstats_test.go | 99 + .../single_endpoint_receiver_test.go | 1 - internal/config/main.go | 16 + internal/config/main_test.go | 12 + internal/config/options.go | 8 + internal/manifests/collector/collector.go | 29 + .../manifests/collector/collector_test.go | 343 ++ .../manifests/collector/config_replace.go | 4 +- internal/manifests/collector/configmap.go | 20 +- .../manifests/collector/configmap_test.go | 59 + internal/manifests/collector/container.go | 18 +- 
 .../manifests/collector/container_test.go     |   24 +
 internal/manifests/collector/rbac.go          |   27 +
 internal/manifests/collector/service.go       |   40 +-
 internal/manifests/collector/service_test.go  |  201 +
 internal/manifests/collector/statefulset.go   |    7 +-
 .../manifests/collector/statefulset_test.go   |   39 +
 .../collector/targetallocator_test.go         |   17 +-
 internal/manifests/collector/volume.go        |   13 +
 internal/manifests/collector/volume_test.go   |   52 +
 internal/manifests/mutate.go                  |  168 +-
 internal/manifests/mutate_test.go             | 2444 ++++
 internal/manifests/params.go                  |    3 +
 .../adapters/config_to_prom_config.go         |   32 +-
 .../adapters/config_to_prom_config_test.go    |   42 +
 .../manifests/targetallocator/certificate.go  |  118 +
 .../targetallocator/certificate_test.go       |  221 +
 .../manifests/targetallocator/configmap.go    |   20 +
 .../targetallocator/configmap_test.go         |  115 +
 .../manifests/targetallocator/container.go    |   14 +
 .../targetallocator/container_test.go         |   29 +
 internal/manifests/targetallocator/issuer.go  |   63 +
 .../manifests/targetallocator/issuer_test.go  |  113 +
 internal/manifests/targetallocator/service.go |   23 +-
 .../manifests/targetallocator/service_test.go |   33 +
 .../targetallocator/targetallocator.go        |   11 +
 internal/manifests/targetallocator/volume.go  |   13 +
 .../manifests/targetallocator/volume_test.go  |   61 +
 internal/naming/main.go                       |   40 +
 internal/operator-metrics/metrics.go          |  197 +
 internal/operator-metrics/metrics_test.go     |  201 +
 internal/rbac/access.go                       |    7 +
 internal/rbac/format.go                       |   15 +-
 internal/rbac/format_test.go                  |    6 +-
 .../webhook/podmutation/webhookhandler.go     |    2 +-
 main.go                                       |   65 +-
 pkg/collector/upgrade/suite_test.go           |    7 +
 pkg/collector/upgrade/upgrade.go              |   20 +-
 pkg/collector/upgrade/upgrade_test.go         |    2 +-
 pkg/collector/upgrade/v0_104_0_test.go        |    7 +-
 pkg/collector/upgrade/v0_105_0_test.go        |    3 +-
 pkg/collector/upgrade/v0_110_0_test.go        |   66 +
 pkg/collector/upgrade/v0_111_0.go             |   23 +
 pkg/collector/upgrade/v0_111_0_test.go        |   98 +
 pkg/collector/upgrade/v0_15_0_test.go         |    3 +-
 pkg/collector/upgrade/v0_19_0_test.go         |    7 +-
 pkg/collector/upgrade/v0_24_0_test.go         |    3 +-
 pkg/collector/upgrade/v0_31_0_test.go         |    3 +-
 pkg/collector/upgrade/v0_36_0_test.go         |    3 +-
 pkg/collector/upgrade/v0_38_0_test.go         |    3 +-
 pkg/collector/upgrade/v0_39_0_test.go         |    3 +-
 pkg/collector/upgrade/v0_41_0_test.go         |    3 +-
 pkg/collector/upgrade/v0_43_0_test.go         |    3 +-
 pkg/collector/upgrade/v0_56_0_test.go         |    3 +-
 pkg/collector/upgrade/v0_57_2_test.go         |    3 +-
 pkg/collector/upgrade/v0_61_0_test.go         |    3 +-
 pkg/collector/upgrade/v0_9_0_test.go          |    3 +-
 pkg/collector/upgrade/versions.go             |    4 +
 pkg/constants/env.go                          |   22 +-
 pkg/featuregate/featuregate.go                |   35 +
 pkg/instrumentation/annotation.go             |    1 +
 pkg/instrumentation/apachehttpd.go            |   13 +-
 pkg/instrumentation/apachehttpd_test.go       |   10 +-
 pkg/instrumentation/dotnet.go                 |   15 +-
 pkg/instrumentation/exporter.go               |  150 +
 pkg/instrumentation/exporter_test.go          |  209 +
 pkg/instrumentation/helper.go                 |   23 +
 pkg/instrumentation/helper_test.go            |   87 +
 pkg/instrumentation/javaagent.go              |   17 +-
 pkg/instrumentation/nodejs.go                 |   15 +-
 pkg/instrumentation/podmutator.go             |   59 +
 pkg/instrumentation/podmutator_test.go        |  153 +-
 pkg/instrumentation/python.go                 |   65 +-
 pkg/instrumentation/python_test.go            |  265 +-
 pkg/instrumentation/sdk.go                    |   12 +-
 pkg/instrumentation/sdk_test.go               |    6 +-
 pkg/sidecar/pod.go                            |   38 +-
 pkg/sidecar/pod_test.go                       |  110 +
 .../clusterresourcequotas.yaml                |   11 +
 .../extra-permissions-operator/cronjobs.yaml  |   12 +
 .../daemonsets.yaml                           |   11 +
 .../extra-permissions-operator/events.yaml    |   11 +
 .../extensions.yaml                           |   13 +
 .../namespaces-status.yaml                    |   11 +
 .../nodes-proxy.yaml                          |   11 +
 .../nodes-spec.yaml                           |   12 +
 .../pod-status.yaml                           |   12 +
 .../replicationcontrollers.yaml               |   12 +
 .../resourcequotas.yaml                       |   11 +
 .../receiver-k8scluster/00-install.yaml       |    4 +
 .../receiver-k8scluster/01-assert.yaml        |   80 +
 .../receiver-k8scluster/01-install.yaml       |   18 +
 .../receiver-k8scluster/02-assert.yaml        |   88 +
 .../receiver-k8scluster/02-install.yaml       |   19 +
 .../receiver-k8sevents/00-install.yaml        |    4 +
 .../receiver-k8sevents/01-assert.yaml         |   80 +
 .../receiver-k8sevents/01-install.yaml        |   18 +
 .../receiver-k8sevents/chainsaw-test.yaml     |   18 +
 .../receiver-k8sobjects/00-install.yaml       |    4 +
 .../receiver-k8sobjects/01-assert.yaml        |   31 +
 .../receiver-k8sobjects/01-install.yaml       |   22 +
 .../receiver-k8sobjects/chainsaw-test.yaml    |   18 +
 .../receiver-kubeletstats/00-install.yaml     |    4 +
 .../receiver-kubeletstats/01-assert.yaml      |   48 +
 .../receiver-kubeletstats/01-install.yaml     |   19 +
 .../receiver-kubeletstats/02-assert.yaml      |   30 +
 .../receiver-kubeletstats/02-install.yaml     |   20 +
 .../receiver-kubeletstats/03-assert.yaml      |   30 +
 .../receiver-kubeletstats/03-install.yaml     |   20 +
 .../receiver-kubeletstats/chainsaw-test.yaml  |   30 +
 .../instrumentation-java-tls/.gitignore       |    2 +
 .../00-install-collector.yaml                 |   43 +
 .../00-install-instrumentation.yaml           |   19 +
 .../instrumentation-java-tls/01-assert.yaml   |   70 +
 .../01-install-app.yaml                       |   27 +
 .../instrumentation-java-tls/ca.yaml          |   30 +
 .../chainsaw-test.yaml                        |   46 +
 .../client-secret.yaml                        |    9 +
 .../generate-certs.sh                         |   14 +
 .../server-secret.yaml                        |    9 +
 .../01-install-app.yaml                       |    2 -
 .../01-install-app.yaml                       |    2 -
 .../02-install-app.yaml                       |    3 -
 .../00-install-collector.yaml                 |   22 +
 .../00-install-instrumentation.yaml           |   38 +
 .../01-assert.yaml                            |   84 +
 .../01-install-app.yaml                       |   32 +
 .../chainsaw-test.yaml                        |   40 +
 .../01-assert.yaml                            |    4 +
 .../02-assert.yaml                            |    2 +
 .../00-install-collector.yaml                 |   22 +
 .../00-install-instrumentation.yaml           |   30 +
 .../01-assert.yaml                            |   85 +
 .../01-install-app.yaml                       |   29 +
 .../chainsaw-test.yaml                        |   40 +
 .../instrumentation-python/01-assert.yaml     |    2 +
 .../02-assert.yaml                            |    5 +
 .../01-assert.yaml                            |    5 +
 tests/e2e-native-sidecar/00-assert.yaml       |   22 +
 tests/e2e-native-sidecar/00-install.yaml      |   41 +
 tests/e2e-native-sidecar/chainsaw-test.yaml   |   14 +
 .../chainsaw-test.yaml                        |   10 +
 .../install-loki.yaml                         |    2 +-
 tests/e2e-openshift/monitoring/03-assert.yaml |    1 +
 .../03-create-monitoring-roles.yaml           |    2 +-
 .../monitoring/chainsaw-test.yaml             |   26 +-
 .../e2e-openshift/monitoring/check_metrics.sh |   24 +-
 .../multi-cluster/04-assert.yaml              |    8 +-
 .../must-gather/assert-install-app.yaml       |   77 +
 .../assert-install-target-allocator.yaml      |   93 +
 .../must-gather/chainsaw-test.yaml            |   70 +
 .../must-gather/check_must_gather.sh          |   51 +
 .../must-gather/install-app.yaml              |   31 +
 .../install-collector-sidecar.yaml            |   22 +
 .../must-gather/install-instrumentation.yaml  |   33 +
 .../must-gather/install-target-allocator.yaml |   70 +
 .../00-install-jaeger.yaml                    |    7 -
 .../02-otel-metrics-collector.yaml            |   10 +-
 .../otlp-metrics-traces/chainsaw-test.yaml    |   22 +
 .../otlp-metrics-traces/check_must_gather.sh  |   41 +
 .../certmanager-permissions/certmanager.yaml  |   17 +
 .../ta-collector-mtls/00-assert.yaml          |   89 +
 .../ta-collector-mtls/00-install.yaml         |  187 +
 .../ta-collector-mtls/01-assert.yaml          |   29 +
 .../ta-collector-mtls/01-install.yaml         |   78 +
 .../ta-collector-mtls/02-assert.yaml          |   20 +
 .../ta-collector-mtls/02-install.yaml         |   63 +
 .../ta-collector-mtls/chainsaw-test.yaml      |   34 +
 .../targetallocator-label/00-assert.yaml      |   40 +
 .../targetallocator-label/00-install.yaml     |   30 +
 .../01-add-ta-label.yaml                      |   26 +
 .../targetallocator-label/01-assert.yaml      |   39 +
 .../targetallocator-label/02-assert.yaml      |   39 +
 .../02-change-collector-config.yaml           |   22 +
 .../targetallocator-label/03-assert.yaml      |   10 +
 .../targetallocator-label/chainsaw-test.yaml  |   50 +
 .../targetallocator-features/00-assert.yaml   |    2 +-
 .../targetallocator-features/00-install.yaml  |    1 -
 .../00-assert.yaml                            |    5 +-
 .../00-assert.yaml                            |    5 +-
 .../opentelemetry-operator-v0.86.0.yaml       |    2 +-
 ...emonset-without-additional-containers.yaml |   14 +
 ...loyment-without-additional-containers.yaml |   14 +
 ...efulset-without-additional-containers.yaml |   14 +
 ...lectors-without-additional-containers.yaml |   73 +
 ...-daemonset-with-additional-containers.yaml |   16 +
 ...deployment-with-additional-containers.yaml |   16 +
 ...tatefulset-with-additional-containers.yaml |   16 +
 ...collectors-with-additional-containers.yaml |   88 +
 ...t-with-modified-additional-containers.yaml |   18 +
 ...t-with-modified-additional-containers.yaml |   18 +
 ...t-with-modified-additional-containers.yaml |   18 +
 ...dify-collectors-additional-containers.yaml |   82 +
 .../chainsaw-test.yaml                        |   66 +
 .../00-assert-daemonset-without-affinity.yaml |   13 +
 ...00-assert-deployment-without-affinity.yaml |   13 +
 ...0-assert-statefulset-without-affinity.yaml |   13 +
 ...0-install-collectors-without-affinity.yaml |   73 +
 .../01-assert-daemonset-with-affinity.yaml    |   13 +
 .../01-assert-deployment-with-affinity.yaml   |   13 +
 .../01-assert-statefulset-with-affinity.yaml  |   13 +
 .../01-install-collectors-with-affinity.yaml  |  100 +
 ...sert-daemonset-with-modified-affinity.yaml |   16 +
 ...ert-deployment-with-modified-affinity.yaml |   16 +
 ...rt-statefulset-with-modified-affinity.yaml |   16 +
 .../02-modify-collectors-affinity.yaml        |  103 +
 .../e2e/affinity-collector/chainsaw-test.yaml |   66 +
 ...ssert-daemonset-with-extra-annotation.yaml |   11 +
 ...sert-deployment-with-extra-annotation.yaml |   11 +
 ...ert-statefulset-with-extra-annotation.yaml |   11 +
 ...tall-collectors-with-extra-annotation.yaml |   73 +
 ...sert-daemonset-with-annotation-change.yaml |   13 +
 ...ert-deployment-with-annotation-change.yaml |   13 +
 ...rt-statefulset-with-annotation-change.yaml |   13 +
 ...all-collectors-with-annotation-change.yaml |   76 +
 ...rt-daemonset-without-extra-annotation.yaml |   15 +
 ...t-deployment-without-extra-annotation.yaml |   15 +
 ...-statefulset-without-extra-annotation.yaml |   15 +
 ...l-collectors-without-extra-annotation.yaml |   67 +
 .../02-manual-annotation-resources.yaml       |   35 +
 .../chainsaw-test.yaml                        |   53 +
 .../00-assert-daemonset-without-args.yaml     |   16 +
 .../00-assert-deployment-without-args.yaml    |   16 +
 .../00-assert-statefulset-without-args.yaml   |   16 +
 .../00-install-collectors-without-args.yaml   |   73 +
 .../01-assert-daemonset-with-args.yaml        |   15 +
 .../01-assert-deployment-with-args.yaml       |   15 +
 .../01-assert-statefulset-with-args.yaml      |   15 +
 .../01-install-collectors-with-args.yaml      |   79 +
 ...2-assert-daemonset-with-modified-args.yaml |   16 +
 ...-assert-deployment-with-modified-args.yaml |   16 +
 ...assert-statefulset-with-modified-args.yaml |   16 +
 .../02-modify-collectors-args.yaml            |   79 +
 tests/e2e/args-collector/chainsaw-test.yaml   |   66 +
 tests/e2e/extension/00-assert.yaml            |  140 +
 tests/e2e/extension/00-install.yaml           |   30 +
 tests/e2e/extension/chainsaw-test.yaml        |   14 +
 .../00-assert-daemonset-with-extra-label.yaml |   14 +
 ...00-assert-deployment-with-extra-label.yaml |   14 +
 ...0-assert-statefulset-with-extra-label.yaml |   14 +
 ...0-install-collectors-with-extra-label.yaml |   73 +
 ...01-assert-daemonset-with-label-change.yaml |   16 +
 ...1-assert-deployment-with-label-change.yaml |   16 +
 ...-assert-statefulset-with-label-change.yaml |   16 +
 ...-install-collectors-with-label-change.yaml |   76 +
 ...-assert-daemonset-without-extra-label.yaml |   18 +
 ...assert-deployment-without-extra-label.yaml |   18 +
 ...ssert-statefulset-without-extra-label.yaml |   18 +
 ...nstall-collectors-without-extra-label.yaml |   67 +
 .../02-manual-labeling-resources.yaml         |   35 +
 .../label-change-collector/chainsaw-test.yaml |   53 +
 tests/e2e/managed-reconcile/02-assert.yaml    |    5 +-
 tests/e2e/multiple-configmaps/00-assert.yaml  |    2 +-
 ...tall-collectors-without-node-selector.yaml |   38 +-
 ...install-collectors-with-node-selector.yaml |   38 +-
 .../operator-restart/assert-operator-pod.yaml |   16 +
 tests/e2e/operator-restart/chainsaw-test.yaml |   36 +
 .../e2e/smoke-targetallocator/00-assert.yaml  |    5 +-
 tests/e2e/statefulset-features/00-assert.yaml |    2 +-
 tests/e2e/statefulset-features/01-assert.yaml |    2 +-
 tests/e2e/versioned-configmaps/00-assert.yaml |    4 +-
 tests/e2e/versioned-configmaps/01-assert.yaml |    6 +-
 tests/e2e/volume-claim-label/00-assert.yaml   |   42 +
 tests/e2e/volume-claim-label/00-install.yaml  |   35 +
 tests/e2e/volume-claim-label/01-assert.yaml   |   42 +
 ...1-update-volume-claim-template-labels.yaml |   35 +
 .../e2e/volume-claim-label/chainsaw-test.yaml |   20 +
 versions.txt                                  |   12 +-
 396 files changed, 24952 insertions(+), 2171 deletions(-)
 delete mode 100755 .chloggen/3090-enable-multiinstrumentation-by-default.yaml
 delete mode 100755 .chloggen/3149-add-must-gather.yaml
 delete mode 100755 .chloggen/add_all_receiver_defaults.yaml
 delete mode 100755 .chloggen/fips.yaml
 rename .chloggen/{improve-probe-parsing.yaml => fix-prometheus-rule-file.yaml} (75%)
 delete mode 100755 .chloggen/remove_localhost_fg.yaml
 delete mode 100755 .chloggen/resource-attribute-from-annotations.yaml
 rename .chloggen/{container-names.yaml => revert-3379-otel-configmap.yaml} (68%)
 rename .chloggen/{add_receiver_defaults.yaml => service-extension.yaml} (85%)
 create mode 100644 .linkspector.yml
 create mode 100644 bundle/openshift/manifests/opentelemetry-operator-prometheus-rules_monitoring.coreos.com_v1_prometheusrule.yaml
 create mode 100644 bundle/openshift/manifests/opentelemetry-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml
 create mode 100644 bundle/openshift/manifests/opentelemetry-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml
 create mode 100644 cmd/otel-allocator/benchmark_test.go
 create mode 100644 config/overlays/openshift/manager_auth_proxy_tls_patch.yaml
 create mode 100644 config/overlays/openshift/metrics_service_tls_patch.yaml
 delete mode 100644 config/prometheus/kustomization.yaml
 delete mode 100644 config/prometheus/monitor.yaml
 create mode 100644 controllers/targetallocator_reconciler_test.go
 create mode 100644 docs/compatibility.md
 create mode 100644 internal/autodetect/autodetectutils/utils.go
 create mode 100644 internal/autodetect/certmanager/check.go
 create mode 100644 internal/autodetect/certmanager/operator.go
 create mode 100644 internal/components/receivers/k8scluster.go
 create mode 100644 internal/components/receivers/k8scluster_test.go
 create mode 100644 internal/components/receivers/k8sevents.go
 create mode 100644 internal/components/receivers/k8sobjects.go
 create mode 100644 internal/components/receivers/k8sobjects_test.go
 create mode 100644 internal/components/receivers/kubeletstats.go
 create mode 100644 internal/components/receivers/kubeletstats_test.go
 create mode 100644 internal/manifests/collector/collector_test.go
 create mode 100644 internal/manifests/targetallocator/certificate.go
 create mode 100644 internal/manifests/targetallocator/certificate_test.go
 create mode 100644 internal/manifests/targetallocator/issuer.go
 create mode 100644 internal/manifests/targetallocator/issuer_test.go
 create mode 100644 internal/operator-metrics/metrics.go
 create mode 100644 internal/operator-metrics/metrics_test.go
 create mode 100644 pkg/collector/upgrade/v0_110_0_test.go
 create mode 100644 pkg/collector/upgrade/v0_111_0.go
 create mode 100644 pkg/collector/upgrade/v0_111_0_test.go
 create mode 100644 pkg/instrumentation/exporter.go
 create mode 100644 pkg/instrumentation/exporter_test.go
 create mode 100644 tests/e2e-automatic-rbac/extra-permissions-operator/clusterresourcequotas.yaml
 create mode 100644 tests/e2e-automatic-rbac/extra-permissions-operator/cronjobs.yaml
 create mode 100644 tests/e2e-automatic-rbac/extra-permissions-operator/daemonsets.yaml
 create mode 100644 tests/e2e-automatic-rbac/extra-permissions-operator/events.yaml
 create mode 100644 tests/e2e-automatic-rbac/extra-permissions-operator/extensions.yaml
 create mode 100644 tests/e2e-automatic-rbac/extra-permissions-operator/namespaces-status.yaml
 create mode 100644 tests/e2e-automatic-rbac/extra-permissions-operator/nodes-proxy.yaml
 create mode 100644 tests/e2e-automatic-rbac/extra-permissions-operator/nodes-spec.yaml
 create mode 100644 tests/e2e-automatic-rbac/extra-permissions-operator/pod-status.yaml
 create mode 100644 tests/e2e-automatic-rbac/extra-permissions-operator/replicationcontrollers.yaml
 create mode 100644 tests/e2e-automatic-rbac/extra-permissions-operator/resourcequotas.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-k8scluster/00-install.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-k8scluster/01-assert.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-k8scluster/01-install.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-k8scluster/02-assert.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-k8scluster/02-install.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-k8sevents/00-install.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-k8sevents/01-assert.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-k8sevents/01-install.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-k8sevents/chainsaw-test.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-k8sobjects/00-install.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-k8sobjects/01-assert.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-k8sobjects/01-install.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-k8sobjects/chainsaw-test.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-kubeletstats/00-install.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-kubeletstats/01-assert.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-kubeletstats/01-install.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-kubeletstats/02-assert.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-kubeletstats/02-install.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-kubeletstats/03-assert.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-kubeletstats/03-install.yaml
 create mode 100644 tests/e2e-automatic-rbac/receiver-kubeletstats/chainsaw-test.yaml
 create mode 100644 tests/e2e-instrumentation/instrumentation-java-tls/.gitignore
 create mode 100644 tests/e2e-instrumentation/instrumentation-java-tls/00-install-collector.yaml
 create mode 100644 tests/e2e-instrumentation/instrumentation-java-tls/00-install-instrumentation.yaml
 create mode 100644 tests/e2e-instrumentation/instrumentation-java-tls/01-assert.yaml
 create mode 100644 tests/e2e-instrumentation/instrumentation-java-tls/01-install-app.yaml
 create mode 100644 tests/e2e-instrumentation/instrumentation-java-tls/ca.yaml
 create mode 100755 tests/e2e-instrumentation/instrumentation-java-tls/chainsaw-test.yaml
 create mode 100644 tests/e2e-instrumentation/instrumentation-java-tls/client-secret.yaml
 create mode 100755 tests/e2e-instrumentation/instrumentation-java-tls/generate-certs.sh
 create mode 100644 tests/e2e-instrumentation/instrumentation-java-tls/server-secret.yaml
 create mode 100644 tests/e2e-instrumentation/instrumentation-nodejs-volume/00-install-collector.yaml
 create mode 100644 tests/e2e-instrumentation/instrumentation-nodejs-volume/00-install-instrumentation.yaml
 create mode 100644 tests/e2e-instrumentation/instrumentation-nodejs-volume/01-assert.yaml
 create mode 100644 tests/e2e-instrumentation/instrumentation-nodejs-volume/01-install-app.yaml
 create mode 100755 tests/e2e-instrumentation/instrumentation-nodejs-volume/chainsaw-test.yaml
 create mode 100644 tests/e2e-instrumentation/instrumentation-python-musl/00-install-collector.yaml
 create mode 100644 tests/e2e-instrumentation/instrumentation-python-musl/00-install-instrumentation.yaml
 create mode 100644 tests/e2e-instrumentation/instrumentation-python-musl/01-assert.yaml
 create mode 100644 tests/e2e-instrumentation/instrumentation-python-musl/01-install-app.yaml
 create mode 100755 tests/e2e-instrumentation/instrumentation-python-musl/chainsaw-test.yaml
 create mode 100644 tests/e2e-native-sidecar/00-assert.yaml
 create mode 100644 tests/e2e-native-sidecar/00-install.yaml
 create mode 100755 tests/e2e-native-sidecar/chainsaw-test.yaml
 create mode 100644 tests/e2e-openshift/must-gather/assert-install-app.yaml
 create mode 100644 tests/e2e-openshift/must-gather/assert-install-target-allocator.yaml
 create mode 100755 tests/e2e-openshift/must-gather/chainsaw-test.yaml
 create mode 100755 tests/e2e-openshift/must-gather/check_must_gather.sh
 create mode 100644 tests/e2e-openshift/must-gather/install-app.yaml
 create mode 100644 tests/e2e-openshift/must-gather/install-collector-sidecar.yaml
 create mode 100644 tests/e2e-openshift/must-gather/install-instrumentation.yaml
 create mode 100644 tests/e2e-openshift/must-gather/install-target-allocator.yaml
 create mode 100755 tests/e2e-openshift/otlp-metrics-traces/check_must_gather.sh
 create mode 100644 tests/e2e-ta-collector-mtls/certmanager-permissions/certmanager.yaml
 create mode 100644 tests/e2e-ta-collector-mtls/ta-collector-mtls/00-assert.yaml
 create mode 100644 tests/e2e-ta-collector-mtls/ta-collector-mtls/00-install.yaml
 create mode 100644 tests/e2e-ta-collector-mtls/ta-collector-mtls/01-assert.yaml
 create mode 100644 tests/e2e-ta-collector-mtls/ta-collector-mtls/01-install.yaml
 create mode 100644 tests/e2e-ta-collector-mtls/ta-collector-mtls/02-assert.yaml
 create mode 100644 tests/e2e-ta-collector-mtls/ta-collector-mtls/02-install.yaml
 create mode 100755 tests/e2e-ta-collector-mtls/ta-collector-mtls/chainsaw-test.yaml
 create mode 100644 tests/e2e-targetallocator-cr/targetallocator-label/00-assert.yaml
 create mode 100644 tests/e2e-targetallocator-cr/targetallocator-label/00-install.yaml
 create mode 100644 tests/e2e-targetallocator-cr/targetallocator-label/01-add-ta-label.yaml
 create mode 100644 tests/e2e-targetallocator-cr/targetallocator-label/01-assert.yaml
 create mode 100644 tests/e2e-targetallocator-cr/targetallocator-label/02-assert.yaml
 create mode 100644 tests/e2e-targetallocator-cr/targetallocator-label/02-change-collector-config.yaml
 create mode 100644 tests/e2e-targetallocator-cr/targetallocator-label/03-assert.yaml
 create mode 100755 tests/e2e-targetallocator-cr/targetallocator-label/chainsaw-test.yaml
 create mode 100644 tests/e2e/additional-containers-collector/00-assert-daemonset-without-additional-containers.yaml
 create mode 100644 tests/e2e/additional-containers-collector/00-assert-deployment-without-additional-containers.yaml
 create mode 100644 tests/e2e/additional-containers-collector/00-assert-statefulset-without-additional-containers.yaml
 create mode 100644 tests/e2e/additional-containers-collector/00-install-collectors-without-additional-containers.yaml
 create mode 100644 tests/e2e/additional-containers-collector/01-assert-daemonset-with-additional-containers.yaml
 create mode 100644 tests/e2e/additional-containers-collector/01-assert-deployment-with-additional-containers.yaml
 create mode 100644 tests/e2e/additional-containers-collector/01-assert-statefulset-with-additional-containers.yaml
 create mode 100644 tests/e2e/additional-containers-collector/01-install-collectors-with-additional-containers.yaml
 create mode 100644 tests/e2e/additional-containers-collector/02-assert-daemonset-with-modified-additional-containers.yaml
 create mode 100644 tests/e2e/additional-containers-collector/02-assert-deployment-with-modified-additional-containers.yaml
 create mode 100644 tests/e2e/additional-containers-collector/02-assert-statefulset-with-modified-additional-containers.yaml
 create mode 100644 tests/e2e/additional-containers-collector/02-modify-collectors-additional-containers.yaml
 create mode 100644 tests/e2e/additional-containers-collector/chainsaw-test.yaml
 create mode 100644 tests/e2e/affinity-collector/00-assert-daemonset-without-affinity.yaml
 create mode 100644 tests/e2e/affinity-collector/00-assert-deployment-without-affinity.yaml
 create mode 100644 tests/e2e/affinity-collector/00-assert-statefulset-without-affinity.yaml
 create mode 100644 tests/e2e/affinity-collector/00-install-collectors-without-affinity.yaml
 create mode 100644 tests/e2e/affinity-collector/01-assert-daemonset-with-affinity.yaml
 create mode 100644 tests/e2e/affinity-collector/01-assert-deployment-with-affinity.yaml
 create mode 100644 tests/e2e/affinity-collector/01-assert-statefulset-with-affinity.yaml
 create mode 100644 tests/e2e/affinity-collector/01-install-collectors-with-affinity.yaml
 create mode 100644 tests/e2e/affinity-collector/02-assert-daemonset-with-modified-affinity.yaml
 create mode 100644 tests/e2e/affinity-collector/02-assert-deployment-with-modified-affinity.yaml
 create mode 100644 tests/e2e/affinity-collector/02-assert-statefulset-with-modified-affinity.yaml
 create mode 100644 tests/e2e/affinity-collector/02-modify-collectors-affinity.yaml
 create mode 100644 tests/e2e/affinity-collector/chainsaw-test.yaml
 create mode 100644 tests/e2e/annotation-change-collector/00-assert-daemonset-with-extra-annotation.yaml
 create mode 100644 tests/e2e/annotation-change-collector/00-assert-deployment-with-extra-annotation.yaml
 create mode 100644 tests/e2e/annotation-change-collector/00-assert-statefulset-with-extra-annotation.yaml
 create mode 100644 tests/e2e/annotation-change-collector/00-install-collectors-with-extra-annotation.yaml
 create mode 100644 tests/e2e/annotation-change-collector/01-assert-daemonset-with-annotation-change.yaml
 create mode 100644 tests/e2e/annotation-change-collector/01-assert-deployment-with-annotation-change.yaml
 create mode 100644 tests/e2e/annotation-change-collector/01-assert-statefulset-with-annotation-change.yaml
 create mode 100644 tests/e2e/annotation-change-collector/01-install-collectors-with-annotation-change.yaml
 create mode 100644 tests/e2e/annotation-change-collector/02-assert-daemonset-without-extra-annotation.yaml
 create mode 100644 tests/e2e/annotation-change-collector/02-assert-deployment-without-extra-annotation.yaml
 create mode 100644 tests/e2e/annotation-change-collector/02-assert-statefulset-without-extra-annotation.yaml
 create mode 100644 tests/e2e/annotation-change-collector/02-install-collectors-without-extra-annotation.yaml
 create mode 100644 tests/e2e/annotation-change-collector/02-manual-annotation-resources.yaml
 create mode 100644 tests/e2e/annotation-change-collector/chainsaw-test.yaml
 create mode 100644 tests/e2e/args-collector/00-assert-daemonset-without-args.yaml
 create mode 100644 tests/e2e/args-collector/00-assert-deployment-without-args.yaml
 create mode 100644 tests/e2e/args-collector/00-assert-statefulset-without-args.yaml
 create mode 100644 tests/e2e/args-collector/00-install-collectors-without-args.yaml
 create mode 100644 tests/e2e/args-collector/01-assert-daemonset-with-args.yaml
 create mode 100644 tests/e2e/args-collector/01-assert-deployment-with-args.yaml
 create mode 100644 tests/e2e/args-collector/01-assert-statefulset-with-args.yaml
 create mode 100644 tests/e2e/args-collector/01-install-collectors-with-args.yaml
 create mode 100644 tests/e2e/args-collector/02-assert-daemonset-with-modified-args.yaml
 create mode 100644 tests/e2e/args-collector/02-assert-deployment-with-modified-args.yaml
 create mode 100644 tests/e2e/args-collector/02-assert-statefulset-with-modified-args.yaml
 create mode 100644 tests/e2e/args-collector/02-modify-collectors-args.yaml
 create mode 100644 tests/e2e/args-collector/chainsaw-test.yaml
 create mode 100644 tests/e2e/extension/00-assert.yaml
 create mode 100644 tests/e2e/extension/00-install.yaml
 create mode 100644 tests/e2e/extension/chainsaw-test.yaml
 create mode 100644 tests/e2e/label-change-collector/00-assert-daemonset-with-extra-label.yaml
 create mode 100644 tests/e2e/label-change-collector/00-assert-deployment-with-extra-label.yaml
 create mode 100644 tests/e2e/label-change-collector/00-assert-statefulset-with-extra-label.yaml
 create mode 100644 tests/e2e/label-change-collector/00-install-collectors-with-extra-label.yaml
 create mode 100644 tests/e2e/label-change-collector/01-assert-daemonset-with-label-change.yaml
 create mode 100644 tests/e2e/label-change-collector/01-assert-deployment-with-label-change.yaml
 create mode 100644 tests/e2e/label-change-collector/01-assert-statefulset-with-label-change.yaml
 create mode 100644 tests/e2e/label-change-collector/01-install-collectors-with-label-change.yaml
 create mode 100644 tests/e2e/label-change-collector/02-assert-daemonset-without-extra-label.yaml
 create mode 100644 tests/e2e/label-change-collector/02-assert-deployment-without-extra-label.yaml
 create mode 100644 tests/e2e/label-change-collector/02-assert-statefulset-without-extra-label.yaml
 create mode 100644 tests/e2e/label-change-collector/02-install-collectors-without-extra-label.yaml
 create mode 100644 tests/e2e/label-change-collector/02-manual-labeling-resources.yaml
 create mode 100644 tests/e2e/label-change-collector/chainsaw-test.yaml
 create mode 100644 tests/e2e/operator-restart/assert-operator-pod.yaml
 create mode 100644 tests/e2e/operator-restart/chainsaw-test.yaml
 create mode 100644 tests/e2e/volume-claim-label/00-assert.yaml
 create mode 100644 tests/e2e/volume-claim-label/00-install.yaml
 create mode 100644 tests/e2e/volume-claim-label/01-assert.yaml
 create mode 100644 tests/e2e/volume-claim-label/01-update-volume-claim-template-labels.yaml
 create mode 100755 tests/e2e/volume-claim-label/chainsaw-test.yaml

diff --git a/.chloggen/3090-enable-multiinstrumentation-by-default.yaml b/.chloggen/3090-enable-multiinstrumentation-by-default.yaml
deleted file mode 100755
index 29cbebcef3..0000000000
--- a/.chloggen/3090-enable-multiinstrumentation-by-default.yaml
+++ /dev/null
@@ -1,30 +0,0 @@
-# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
-change_type: 'breaking'
-
-# The name of the component, or a single word describing the area of concern, (e.g. collector, target allocator, auto-instrumentation, opamp, github action)
-component: auto-instrumentation
-
-# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
-note: Enable multi instrumentation by default.
-
-# One or more tracking issues related to the change
-issues: [3090]
-
-# (Optional) One or more lines of additional information to render under the primary note.
-# These lines will be padded with 2 spaces and then inserted directly into the document.
-# Use pipe (|) for multiline entries.
-subtext: |
-  Starting with this release, the OpenTelemetry Operator now enables multi-instrumentation by default.
-  This enhancement allows instrumentation of multiple containers in a pod with language-specific configurations.|
-  Key Changes:
-  - Single Instrumentation (Default Behavior): If no container names are specified using the
-    `instrumentation.opentelemetry.io/container-names` annotation, instrumentation will be applied to the first container in
-    the pod spec by default. This only applies when single instrumentation injection is configured.
-  - Multi-Container Pods: In scenarios where different containers in a pod use distinct technologies, users must specify the
-    container(s) for instrumentation using language-specific annotations. Without this specification, the default behavior may
-    not work as expected for multi-container environments.
-  Compatibility:
-  - Users already utilizing the `instrumentation.opentelemetry.io/container-names` annotation do not need to take any action.
-    Their existing setup will continue to function as before.
-
-  Important: Users who attempt to configure both `instrumentation.opentelemetry.io/container-names` and language-specific annotations
-  (for multi-instrumentation) simultaneously will encounter an error, as this configuration is not supported.
diff --git a/.chloggen/3149-add-must-gather.yaml b/.chloggen/3149-add-must-gather.yaml
deleted file mode 100755
index d42c553265..0000000000
--- a/.chloggen/3149-add-must-gather.yaml
+++ /dev/null
@@ -1,25 +0,0 @@
-# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
-change_type: enhancement
-
-# The name of the component, or a single word describing the area of concern, (e.g. collector, target allocator, auto-instrumentation, opamp, github action)
-component: auto-instrumentation, collector
-
-# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
-note: "Add a must gather utility to help troubleshoot"
-
-# One or more tracking issues related to the change
-issues: [3149]
-
-# (Optional) One or more lines of additional information to render under the primary note.
-# These lines will be padded with 2 spaces and then inserted directly into the document.
-# Use pipe (|) for multiline entries.
-subtext: |
-  The new utility is available as part of a new container image.
-
-  To use the image in a running OpenShift cluster, you need to run the following command:
-
-  ```sh
-  oc adm must-gather --image=ghcr.io/open-telemetry/opentelemetry-operator/must-gather -- /usr/bin/must-gather --operator-namespace opentelemetry-operator-system
-  ```
-
-  See the [README](https://github.com/open-telemetry/opentelemetry-operator/blob/main/cmd/gather/README.md) for more details.
diff --git a/.chloggen/add_all_receiver_defaults.yaml b/.chloggen/add_all_receiver_defaults.yaml
deleted file mode 100755
index e4bb2b6c2b..0000000000
--- a/.chloggen/add_all_receiver_defaults.yaml
+++ /dev/null
@@ -1,18 +0,0 @@
-# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
-change_type: enhancement
-
-# The name of the component, or a single word describing the area of concern, (e.g. collector, target allocator, auto-instrumentation, opamp, github action)
-component: collector
-
-# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
-note: set default address for all parsed receivers
-
-# One or more tracking issues related to the change
-issues: [3126]
-
-# (Optional) One or more lines of additional information to render under the primary note.
-# These lines will be padded with 2 spaces and then inserted directly into the document. -# Use pipe (|) for multiline entries. -subtext: | - This feature is enabled by default. It can be disabled by specifying - `--feature-gates=-operator.collector.default.config`. diff --git a/.chloggen/fips.yaml b/.chloggen/fips.yaml deleted file mode 100755 index ec572de643..0000000000 --- a/.chloggen/fips.yaml +++ /dev/null @@ -1,19 +0,0 @@ -# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix' -change_type: enhancement - -# The name of the component, or a single word describing the area of concern, (e.g. collector, target allocator, auto-instrumentation, opamp, github action) -component: collector - -# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`). -note: Add flag to disable components when operator runs on FIPS enabled cluster. - -# One or more tracking issues related to the change -issues: [3315] - -# (Optional) One or more lines of additional information to render under the primary note. -# These lines will be padded with 2 spaces and then inserted directly into the document. -# Use pipe (|) for multiline entries. -subtext: | - Flag `--fips-disabled-components=receiver.otlp,exporter.otlp,processor.batch,extension.oidc` can be used to disable - components when operator runs on FIPS enabled cluster. The operator uses `/proc/sys/crypto/fips_enabled` to check - if FIPS is enabled. 
diff --git a/.chloggen/improve-probe-parsing.yaml b/.chloggen/fix-prometheus-rule-file.yaml similarity index 75% rename from .chloggen/improve-probe-parsing.yaml rename to .chloggen/fix-prometheus-rule-file.yaml index ec9b3fe8c2..28ce057468 100755 --- a/.chloggen/improve-probe-parsing.yaml +++ b/.chloggen/fix-prometheus-rule-file.yaml @@ -1,14 +1,14 @@ # One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix' -change_type: enhancement +change_type: 'bug_fix' # The name of the component, or a single word describing the area of concern, (e.g. collector, target allocator, auto-instrumentation, opamp, github action) -component: collector +component: 'github action' # A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`). -note: Improves healthcheck parsing capabilities, allowing for future extensions to configure a healthcheck other than the v1 healthcheck extension. +note: Add new line character at the end of PrometheusRule file. # One or more tracking issues related to the change -issues: [3184] +issues: [3503] # (Optional) One or more lines of additional information to render under the primary note. # These lines will be padded with 2 spaces and then inserted directly into the document. diff --git a/.chloggen/remove_localhost_fg.yaml b/.chloggen/remove_localhost_fg.yaml deleted file mode 100755 index 276c3d74b3..0000000000 --- a/.chloggen/remove_localhost_fg.yaml +++ /dev/null @@ -1,36 +0,0 @@ -# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix' -change_type: breaking - -# The name of the component, or a single word describing the area of concern, (e.g. collector, target allocator, auto-instrumentation, opamp, github action) -component: collector - -# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`). -note: Remove ComponentUseLocalHostAsDefaultHost collector feature gate. 
- -# One or more tracking issues related to the change -issues: [3306] - -# (Optional) One or more lines of additional information to render under the primary note. -# These lines will be padded with 2 spaces and then inserted directly into the document. -# Use pipe (|) for multiline entries. -subtext: | - This change may break setups where receiver endpoints are not explicitly configured to listen on e.g. 0.0.0.0. - Change \#3333 attempts to address this issue for a known set of components. - The operator performs the adjustment for the following receivers: - - otlp - - skywalking - - jaeger - - loki - - opencensus - - zipkin - - tcplog - - udplog - - fluentforward - - statsd - - awsxray/UDP - - carbon - - collectd - - sapm - - signalfx - - splunk_hec - - wavefront diff --git a/.chloggen/resource-attribute-from-annotations.yaml b/.chloggen/resource-attribute-from-annotations.yaml deleted file mode 100755 index 1ddf782c5d..0000000000 --- a/.chloggen/resource-attribute-from-annotations.yaml +++ /dev/null @@ -1,24 +0,0 @@ -change_type: enhancement - -component: auto-instrumentation - -note: Add support for k8s labels such as app.kubernetes.io/name for resource attributes - -issues: [3112] - -subtext: | - You can opt-in as follows: - ```yaml - apiVersion: opentelemetry.io/v1alpha1 - kind: Instrumentation - metadata: - name: my-instrumentation - spec: - defaults: - useLabelsForResourceAttributes: true - ``` - The following labels are supported: - - `app.kubernetes.io/name` becomes `service.name` - - `app.kubernetes.io/version` becomes `service.version` - - `app.kubernetes.io/part-of` becomes `service.namespace` - - `app.kubernetes.io/instance` becomes `service.instance.id` diff --git a/.chloggen/container-names.yaml b/.chloggen/revert-3379-otel-configmap.yaml similarity index 68% rename from .chloggen/container-names.yaml rename to .chloggen/revert-3379-otel-configmap.yaml index 034d411f8d..bd7b66223c 100755 --- a/.chloggen/container-names.yaml +++ 
b/.chloggen/revert-3379-otel-configmap.yaml @@ -5,12 +5,15 @@ change_type: bug_fix component: auto-instrumentation # A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`). -note: Fix ApacheHttpd, Nginx and SDK injectors to honour their container-names annotations. +note: Reverts PR 3379 which inadvertently broke users setting JAVA_TOOL_OPTIONS # One or more tracking issues related to the change -issues: [3313] +issues: [3463] # (Optional) One or more lines of additional information to render under the primary note. # These lines will be padded with 2 spaces and then inserted directly into the document. # Use pipe (|) for multiline entries. -subtext: This is a breaking change if anyone is accidentally using the enablement flag with container names for these 3 injectors. +subtext: | + Reverts a previous PR which was causing JAVA_TOOL_OPTIONS to not be overridden when + set by users. This was resulting in application crashloopbackoffs for users relying + on Java auto-instrumentation. diff --git a/.chloggen/add_receiver_defaults.yaml b/.chloggen/service-extension.yaml similarity index 85% rename from .chloggen/add_receiver_defaults.yaml rename to .chloggen/service-extension.yaml index 7ffaefb2d8..d182754f46 100755 --- a/.chloggen/add_receiver_defaults.yaml +++ b/.chloggen/service-extension.yaml @@ -2,13 +2,13 @@ change_type: enhancement # The name of the component, or a single word describing the area of concern, (e.g. collector, target allocator, auto-instrumentation, opamp, github action) -component: operator +component: collector # A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`). -note: Use 0.0.0.0 as otlp receiver default address +note: support for creating a service for extensions when ports are specified.
# One or more tracking issues related to the change -issues: [3126] +issues: [3460] # (Optional) One or more lines of additional information to render under the primary note. # These lines will be padded with 2 spaces and then inserted directly into the document. diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS index 209e0fe34b..68f4834a72 100644 --- a/.github/CODEOWNERS +++ b/.github/CODEOWNERS @@ -17,6 +17,3 @@ # AutoInstrumentation owners # TBD - -# Target Allocator owners -cmd/otel-allocator @open-telemetry/operator-ta-maintainers diff --git a/.github/workflows/changelog.yaml b/.github/workflows/changelog.yaml index c13feb754f..0cc293b2e6 100644 --- a/.github/workflows/changelog.yaml +++ b/.github/workflows/changelog.yaml @@ -65,16 +65,3 @@ jobs: run: | make chlog-validate \ || { echo "New ./.chloggen/*.yaml file failed validation."; exit 1; } - - # In order to validate any links in the yaml file, render the config to markdown - - name: Render .chloggen changelog entries - run: make chlog-preview > changelog_preview.md - - name: Install markdown-link-check - run: npm install -g markdown-link-check - - name: Run markdown-link-check - run: | - markdown-link-check \ - --verbose \ - --config .github/workflows/check_links_config.json \ - changelog_preview.md \ - || { echo "Check that anchor links are lowercase"; exit 1; } diff --git a/.github/workflows/continuous-integration.yaml b/.github/workflows/continuous-integration.yaml index 829789c19a..dd0fc335f6 100644 --- a/.github/workflows/continuous-integration.yaml +++ b/.github/workflows/continuous-integration.yaml @@ -62,8 +62,6 @@ jobs: with: path: | /home/runner/.cache/golangci-lint - /home/runner/go/pkg/mod - ./bin key: golangcilint-${{ hashFiles('**/go.sum') }} restore-keys: | golangcilint- diff --git a/.github/workflows/e2e.yaml b/.github/workflows/e2e.yaml index 5bc7aaeeec..64d8839087 100644 --- a/.github/workflows/e2e.yaml +++ b/.github/workflows/e2e.yaml @@ -31,9 +31,11 @@ jobs: - e2e-pdb - 
e2e-prometheuscr - e2e-targetallocator + - e2e-targetallocator-cr - e2e-upgrade - e2e-multi-instrumentation - e2e-metadata-filters + - e2e-ta-collector-mtls include: - group: e2e-instrumentation setup: "add-instrumentation-params prepare-e2e" @@ -41,8 +43,17 @@ jobs: setup: "add-instrumentation-params prepare-e2e" - group: e2e-metadata-filters setup: "add-operator-arg OPERATOR_ARG='--annotations-filter=.*filter.out --annotations-filter=config.*.gke.io.* --labels-filter=.*filter.out' prepare-e2e" + - group: e2e-ta-collector-mtls + setup: "add-operator-arg OPERATOR_ARG='--feature-gates=operator.targetallocator.mtls' add-certmanager-permissions prepare-e2e" - group: e2e-automatic-rbac setup: "add-rbac-permissions-to-operator prepare-e2e" + - group: e2e-native-sidecar + setup: "add-operator-arg OPERATOR_ARG='--feature-gates=operator.sidecarcontainers.native' prepare-e2e" + kube-version: "1.29" + - group: e2e-targetallocator + setup: "enable-targetallocator-cr prepare-e2e" + - group: e2e-targetallocator-cr + setup: "enable-targetallocator-cr prepare-e2e" steps: - name: Check out code into the Go module directory uses: actions/checkout@v4 @@ -56,8 +67,6 @@ jobs: with: path: bin key: ${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('Makefile') }}-${{ steps.setup-go.outputs.go-version }} - - name: Install chainsaw - uses: kyverno/action-install-chainsaw@v0.2.11 - name: Install tools run: make install-tools - name: Prepare e2e tests diff --git a/.github/workflows/publish-autoinstrumentation-nodejs.yaml b/.github/workflows/publish-autoinstrumentation-nodejs.yaml index 45b368fbf6..7115105b2f 100644 --- a/.github/workflows/publish-autoinstrumentation-nodejs.yaml +++ b/.github/workflows/publish-autoinstrumentation-nodejs.yaml @@ -26,7 +26,7 @@ jobs: - uses: actions/checkout@v4 - name: Read version - run: echo VERSION=$(cat autoinstrumentation/nodejs/package.json | jq -r '.dependencies."@opentelemetry/sdk-node"') >> $GITHUB_ENV + run: echo VERSION=$(cat 
autoinstrumentation/nodejs/package.json | jq -r '.dependencies."@opentelemetry/auto-instrumentations-node"') >> $GITHUB_ENV - name: Docker meta id: meta @@ -71,7 +71,7 @@ jobs: uses: docker/build-push-action@v6 with: context: autoinstrumentation/nodejs - platforms: linux/amd64,linux/arm64 + platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le push: ${{ github.event_name == 'push' }} build-args: version=${{ env.VERSION }} tags: ${{ steps.meta.outputs.tags }} diff --git a/.github/workflows/reusable-operator-hub-release.yaml b/.github/workflows/reusable-operator-hub-release.yaml index d453b92a93..e9de4190e2 100644 --- a/.github/workflows/reusable-operator-hub-release.yaml +++ b/.github/workflows/reusable-operator-hub-release.yaml @@ -56,7 +56,7 @@ jobs: env: VERSION: ${{ env.version }} run: | - mkdir operators/opentelemetry-operator/${VERSION} + mkdir operators/opentelemetry-operator/${VERSION} cp -R ./tmp/bundle/${{ inputs.folder }}/* operators/opentelemetry-operator/${VERSION} rm -rf ./tmp @@ -73,7 +73,7 @@ jobs: message="Update the opentelemetry to $VERSION" body="Release opentelemetry-operator \`$VERSION\`. 
- cc @pavolloffay @frzifus @yuriolisa @jaronoff97 @TylerHelmuth @swiatekm + cc @pavolloffay @frzifus @yuriolisa @jaronoff97 @TylerHelmuth @swiatekm @iblancasa " branch="update-opentelemetry-operator-to-${VERSION}" diff --git a/.gitignore b/.gitignore index 1438657894..52b40a6635 100644 --- a/.gitignore +++ b/.gitignore @@ -1,4 +1,3 @@ - # Binaries for programs and plugins *.exe *.exe~ @@ -39,8 +38,9 @@ config/manager/kustomization.yaml kubeconfig tests/_build/ config/rbac/extra-permissions-operator/ +config/rbac/certmanager-permissions/ # autoinstrumentation artifacts build node_modules -package-lock.json \ No newline at end of file +package-lock.json diff --git a/.linkspector.yml b/.linkspector.yml new file mode 100644 index 0000000000..e69de29bb2 diff --git a/CHANGELOG.md b/CHANGELOG.md index c9d919240a..80998b7690 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,6 +2,289 @@ +## 0.114.0 + +### 💡 Enhancements 💡 + +- `collector`: Create RBAC rules for the k8s_cluster receiver automatically. (#3427) +- `collector`: Create RBAC rules for the k8sobjects receiver automatically. (#3429) +- `collector`: Add a warning message when a created collector needs extra RBAC permissions and the service account doesn't have them. (#3432) +- `target allocator`: Added the allocation_fallback_strategy option as a fallback for the per-node allocation strategy; it can be enabled with the feature flag operator.targetallocator.fallbackstrategy (#3477) + + If using the per-node allocation strategy, targets that are not attached to a node will not + be allocated. As the per-node strategy is required when running as a daemonset, it is + not possible to assign some targets under a daemonset deployment. + When the feature flag operator.targetallocator.fallbackstrategy is enabled, consistent-hashing + is used as the fallback allocation strategy, currently for "per-node" only.
+ +- `auto-instrumentation`: updated node auto-instrumentation dependencies to the latest version (#3476) + + - auto-instrumentations-node to 0.53.0 + - exporter-metrics-otlp-grpc to 0.55.0 + - exporter-prometheus to 0.55.0 + +- `operator`: Replace references to gcr.io/kubebuilder/kube-rbac-proxy with quay.io/brancz/kube-rbac-proxy (#3485) + +### 🧰 Bug fixes 🧰 + +- `operator`: Operator pod crashed if the Service Monitor for the operator metrics was previously created by another operator pod. (#3446) + + The operator failed when the pod was restarted and the Service Monitor for operator metrics had already been created by another operator pod. + To fix this, the operator now sets the owner reference on the Service Monitor to itself and checks if the Service Monitor already exists. + +- `auto-instrumentation`: Bump base memory requirements for Python and Go (#3479) + +### Components + +* [OpenTelemetry Collector - v0.114.0](https://github.com/open-telemetry/opentelemetry-collector/releases/tag/v0.114.0) +* [OpenTelemetry Contrib - v0.114.0](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.114.0) +* [Java auto-instrumentation - v1.33.5](https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/tag/v1.33.5) +* [.NET auto-instrumentation - v1.2.0](https://github.com/open-telemetry/opentelemetry-dotnet-instrumentation/releases/tag/v1.2.0) +* [Node.JS - v0.53.0](https://github.com/open-telemetry/opentelemetry-js/releases/tag/experimental%2Fv0.53.0) +* [Python - v0.48b0](https://github.com/open-telemetry/opentelemetry-python-contrib/releases/tag/v0.48b0) +* [Go - v0.17.0-alpha](https://github.com/open-telemetry/opentelemetry-go-instrumentation/releases/tag/v0.17.0-alpha) +* [ApacheHTTPD - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4) +* [Nginx - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4) + +## 0.113.0 + +### 💡 Enhancements 💡 + 
+- `operator`: Programmatically create the `ServiceMonitor` for the operator metrics endpoint, ensuring correct namespace handling and dynamic configuration. (#3370) + Previously, the `ServiceMonitor` was created statically from a manifest file, causing failures when the + operator was deployed in a non-default namespace. This enhancement ensures automatic adjustment of the + `serverName` and seamless metrics scraping. +- `collector`: Create RBAC rules for the k8s_events receiver automatically. (#3420) +- `collector`: Inject the K8S_NODE_NAME environment variable for the Kubelet Stats Receiver. (#2779) +- `auto-instrumentation`: add config for installing musl based auto-instrumentation for Python (#2264) +- `auto-instrumentation`: Support `http/json` and `http/protobuf` via the OTEL_EXPORTER_OTLP_PROTOCOL environment variable in addition to the default `grpc` for exporting traces (#3412) +- `target allocator`: enables support for pulling scrape config and probe CRDs in the target allocator (#1842) + +### 🧰 Bug fixes 🧰 + +- `collector`: Fix mutation of deployments, statefulsets, and daemonsets, allowing fields to be removed on update (#2947) + +### Components + +* [OpenTelemetry Collector - v0.113.0](https://github.com/open-telemetry/opentelemetry-collector/releases/tag/v0.113.0) +* [OpenTelemetry Contrib - v0.113.0](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.113.0) +* [Java auto-instrumentation - v1.33.5](https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/tag/v1.33.5) +* [.NET auto-instrumentation - v1.2.0](https://github.com/open-telemetry/opentelemetry-dotnet-instrumentation/releases/tag/v1.2.0) +* [Node.JS - v0.53.0](https://github.com/open-telemetry/opentelemetry-js/releases/tag/experimental%2Fv0.53.0) +* [Python - v0.48b0](https://github.com/open-telemetry/opentelemetry-python-contrib/releases/tag/v0.48b0) +* [Go - 
v0.17.0-alpha](https://github.com/open-telemetry/opentelemetry-go-instrumentation/releases/tag/v0.17.0-alpha) +* [ApacheHTTPD - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4) +* [Nginx - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4) + +## 0.112.0 + +### 💡 Enhancements 💡 + +- `auto-instrumentation`: Support configuring Java auto-instrumentation when runtime configuration is provided from configmap or secret. (#1814) + This change allows users to configure JAVA_TOOL_OPTIONS in a config map or secret when the name of the variable is defined in the pod spec. + In this case the operator sets another JAVA_TOOL_OPTIONS that references the original value, + e.g. `JAVA_TOOL_OPTIONS=$(JAVA_TOOL_OPTIONS) -javaagent:/otel-auto-instrumentation-java/javaagent.jar`. + +- `auto-instrumentation`: Adds VolumeClaimTemplate field to Instrumentation spec to enable user-definable ephemeral volumes for auto-instrumentation. (#3267) +- `collector`: Add support for persistentVolumeClaimRetentionPolicy field (#3305) +- `auto-instrumentation`: build musl based auto-instrumentation in Python docker image (#2264) +- `auto-instrumentation`: An empty line should come before the addition of Include ...opentemetry_agent.conf, as a protection measure against cases of httpd.conf w/o a blank last line (#3401) +- `collector`: Add automatic RBAC creation for the `kubeletstats` receiver. (#3155) +- `auto-instrumentation`: Add Nodejs auto-instrumentation image builds for linux/s390x,linux/ppc64le.
(#3322) + +### 🧰 Bug fixes 🧰 + +- `target allocator`: Permission check fixed for the serviceaccount of the target allocator (#3380) +- `target allocator`: Change docker image to run as non-root (#3378) + +### Components + +* [OpenTelemetry Collector - v0.112.0](https://github.com/open-telemetry/opentelemetry-collector/releases/tag/v0.112.0) +* [OpenTelemetry Contrib - v0.112.0](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.112.0) +* [Java auto-instrumentation - v1.33.5](https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/tag/v1.33.5) +* [.NET auto-instrumentation - v1.2.0](https://github.com/open-telemetry/opentelemetry-dotnet-instrumentation/releases/tag/v1.2.0) +* [Node.JS - v0.53.0](https://github.com/open-telemetry/opentelemetry-js/releases/tag/experimental%2Fv0.53.0) +* [Python - v0.48b0](https://github.com/open-telemetry/opentelemetry-python-contrib/releases/tag/v0.48b0) +* [Go - v0.15.0-alpha](https://github.com/open-telemetry/opentelemetry-go-instrumentation/releases/tag/v0.15.0-alpha) +* [ApacheHTTPD - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4) +* [Nginx - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4) + +## 0.111.0 + +### 💡 Enhancements 💡 + +- `auto-instrumentation`: set OTEL_LOGS_EXPORTER env var to otlp in python instrumentation (#3330) + +- `collector`: Expose the Collector telemetry endpoint by default. (#3361) + + The collector v0.111.0 changes the default binding of the telemetry metrics endpoint from `0.0.0.0` to `localhost`. + To avoid any disruption we fall back to `0.0.0.0:{PORT}` as the default address. + Details can be found here: [opentelemetry-collector#11251](https://github.com/open-telemetry/opentelemetry-collector/pull/11251) + + +- `auto-instrumentation`: Add support for specifying exporter TLS certificates in auto-instrumentation.
(#3338) + + The Instrumentation CR now supports specifying TLS certificates for the exporter: + ```yaml + spec: + exporter: + endpoint: https://otel-collector:4317 + tls: + secretName: otel-tls-certs + configMapName: otel-ca-bundle + # otel-ca-bundle + ca_file: ca.crt + # present in otel-tls-certs + cert_file: tls.crt + # present in otel-tls-certs + key_file: tls.key + ``` + + * Propagating secrets across namespaces can be done with https://github.com/EmberStack/kubernetes-reflector or https://github.com/zakkg3/ClusterSecret + * Restarting workloads on certificate renewal can be done with https://github.com/stakater/Reloader or https://github.com/wave-k8s/wave + +- `collector`: Add native sidecar injection behind a feature gate which is disabled by default. (#2376) + + Native sidecars are supported since Kubernetes version `1.28` and are available by default since `1.29`. + To use native sidecars on Kubernetes v1.28, make sure the "SidecarContainers" feature gate on Kubernetes is enabled. + If native sidecars are available, the operator can be advised to use them by adding + `--feature-gates=operator.sidecarcontainers.native` to the Operator args. + In the future this may become available as a deployment mode on the Collector CR. See [#3356](https://github.com/open-telemetry/opentelemetry-operator/issues/3356) + +- `target allocator, collector`: Enable mTLS between the TA and collector for passing secrets in the scrape_config securely (#1669) + + This change enables mTLS between the collector and the target allocator (requires cert-manager). + This is necessary for passing secrets securely from the TA to the collector for scraping endpoints that have authentication. Use the `operator.targetallocator.mtls` feature gate to enable this feature. See the target allocator [documentation](https://github.com/open-telemetry/opentelemetry-operator/tree/main/cmd/otel-allocator#service--pod-monitor-endpoint-credentials) for more details.
+ +### 🧰 Bug fixes 🧰 + +- `collector-webhook`: Fixed validation of `stabilizationWindowSeconds` in autoscaler behaviour (#3345) + + The validation of `stabilizationWindowSeconds` in the `autoscaler.behaviour.scale[Up|Down]` incorrectly rejected 0 as an invalid value. + This has been fixed to ensure that the value is validated correctly (should be >=0 and <=3600) and the error message has been updated to reflect this. + +### Components + +* [OpenTelemetry Collector - v0.111.0](https://github.com/open-telemetry/opentelemetry-collector/releases/tag/v0.111.0) +* [OpenTelemetry Contrib - v0.111.0](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.111.0) +* [Java auto-instrumentation - v1.33.5](https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/tag/v1.33.5) +* [.NET auto-instrumentation - v1.2.0](https://github.com/open-telemetry/opentelemetry-dotnet-instrumentation/releases/tag/v1.2.0) +* [Node.JS - v0.53.0](https://github.com/open-telemetry/opentelemetry-js/releases/tag/experimental%2Fv0.53.0) +* [Python - v0.48b0](https://github.com/open-telemetry/opentelemetry-python-contrib/releases/tag/v0.48b0) +* [Go - v0.15.0-alpha](https://github.com/open-telemetry/opentelemetry-go-instrumentation/releases/tag/v0.15.0-alpha) +* [ApacheHTTPD - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4) +* [Nginx - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4) + + +## 0.110.0 + +### 🛑 Breaking changes 🛑 + +- `auto-instrumentation`: Enable multi instrumentation by default. (#3090) + + Starting with this release, the OpenTelemetry Operator now enables multi-instrumentation by default. + This enhancement allows instrumentation of multiple containers in a pod with language-specific configurations.
+ + Key Changes: + - Single Instrumentation (Default Behavior): If no container names are specified using the + `instrumentation.opentelemetry.io/container-names` annotation, instrumentation will be applied to the first container in + the pod spec by default. This only applies when single instrumentation injection is configured. + - Multi-Container Pods: In scenarios where different containers in a pod use distinct technologies, users must specify the + container(s) for instrumentation using language-specific annotations. Without this specification, the default behavior may + not work as expected for multi-container environments. + + Compatibility: + - Users already utilizing the `instrumentation.opentelemetry.io/container-names` annotation do not need to take any action. + Their existing setup will continue to function as before. + - Important: Users who attempt to configure both `instrumentation.opentelemetry.io/container-names` and language-specific annotations + (for multi-instrumentation) simultaneously will encounter an error, as this configuration is not supported. + +- `collector`: Remove ComponentUseLocalHostAsDefaultHost collector feature gate. (#3306) + + This change may break setups where receiver endpoints are not explicitly configured to listen on e.g. 0.0.0.0. + Change \#3333 attempts to address this issue for a known set of components. + The operator performs the adjustment for the following receivers: + - otlp + - skywalking + - jaeger + - loki + - opencensus + - zipkin + - tcplog + - udplog + - fluentforward + - statsd + - awsxray/UDP + - carbon + - collectd + - sapm + - signalfx + - splunk_hec + - wavefront + + +### 💡 Enhancements 💡 + +- `auto-instrumentation, collector`: Add a must gather utility to help troubleshoot (#3149) + + The new utility is available as part of a new container image. 
+ + To use the image in a running OpenShift cluster, you need to run the following command: + + ```sh + oc adm must-gather --image=ghcr.io/open-telemetry/opentelemetry-operator/must-gather -- /usr/bin/must-gather --operator-namespace opentelemetry-operator-system + ``` + + See the [README](https://github.com/open-telemetry/opentelemetry-operator/blob/main/cmd/gather/README.md) for more details. + +- `collector`: set default address for all parsed receivers (#3126) + + This feature is enabled by default. It can be disabled by specifying + `--feature-gates=-operator.collector.default.config`. +- `operator`: Use 0.0.0.0 as otlp receiver default address (#3126) +- `collector`: Add flag to disable components when operator runs on FIPS enabled cluster. (#3315) + Flag `--fips-disabled-components=receiver.otlp,exporter.otlp,processor.batch,extension.oidc` can be used to disable + components when operator runs on FIPS enabled cluster. The operator uses `/proc/sys/crypto/fips_enabled` to check + if FIPS is enabled. + +- `collector`: Improves healthcheck parsing capabilities, allowing for future extensions to configure a healthcheck other than the v1 healthcheck extension. (#3184) +- `auto-instrumentation`: Add support for k8s labels such as app.kubernetes.io/name for resource attributes (#3112) + + You can opt-in as follows: + ```yaml + apiVersion: opentelemetry.io/v1alpha1 + kind: Instrumentation + metadata: + name: my-instrumentation + spec: + defaults: + useLabelsForResourceAttributes: true + ``` + The following labels are supported: + - `app.kubernetes.io/name` becomes `service.name` + - `app.kubernetes.io/version` becomes `service.version` + - `app.kubernetes.io/part-of` becomes `service.namespace` + - `app.kubernetes.io/instance` becomes `service.instance.id` + + +### 🧰 Bug fixes 🧰 + +- `auto-instrumentation`: Fix ApacheHttpd, Nginx and SDK injectors to honour their container-names annotations. 
(#3313) + + This is a breaking change if anyone is accidentally using the enablement flag with container names for these 3 injectors. + +### Components + +* [OpenTelemetry Collector - v0.110.0](https://github.com/open-telemetry/opentelemetry-collector/releases/tag/v0.110.0) +* [OpenTelemetry Contrib - v0.110.0](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.110.0) +* [Java auto-instrumentation - v1.33.5](https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/tag/v1.33.5) +* [.NET auto-instrumentation - v1.2.0](https://github.com/open-telemetry/opentelemetry-dotnet-instrumentation/releases/tag/v1.2.0) +* [Node.JS - v0.52.1](https://github.com/open-telemetry/opentelemetry-js/releases/tag/experimental%2Fv0.52.1) +* [Python - v0.48b0](https://github.com/open-telemetry/opentelemetry-python-contrib/releases/tag/v0.48b0) +* [Go - v0.14.0-alpha](https://github.com/open-telemetry/opentelemetry-go-instrumentation/releases/tag/v0.14.0-alpha) +* [ApacheHTTPD - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4) +* [Nginx - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4) + ## 0.109.0 ### 🚩 Deprecations 🚩 diff --git a/Makefile b/Makefile index 939af881d5..8212a91dd7 100644 --- a/Makefile +++ b/Makefile @@ -204,11 +204,28 @@ add-image-opampbridge: add-rbac-permissions-to-operator: manifests kustomize # Kustomize only allows patches in the folder where the kustomization is located # This folder is ignored by .gitignore - cp -r tests/e2e-automatic-rbac/extra-permissions-operator/ config/rbac/extra-permissions-operator + mkdir -p config/rbac/extra-permissions-operator + cp -r tests/e2e-automatic-rbac/extra-permissions-operator/* config/rbac/extra-permissions-operator + cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/clusterresourcequotas.yaml + cd 
config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/cronjobs.yaml + cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/daemonsets.yaml + cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/events.yaml + cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/extensions.yaml cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/namespaces.yaml + cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/namespaces-status.yaml cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/nodes.yaml + cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/nodes-proxy.yaml + cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/nodes-spec.yaml + cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/pod-status.yaml cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/rbac.yaml cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/replicaset.yaml + cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/replicationcontrollers.yaml + cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/resourcequotas.yaml + +.PHONY: enable-targetallocator-cr +enable-targetallocator-cr: + @$(MAKE) 
add-operator-arg OPERATOR_ARG='--feature-gates=operator.collector.targetallocatorcr' + cd config/crd && $(KUSTOMIZE) edit add resource bases/opentelemetry.io_targetallocators.yaml # Deploy controller in the current Kubernetes context, configured in ~/.kube/config .PHONY: deploy @@ -267,6 +284,13 @@ generate: controller-gen e2e: chainsaw $(CHAINSAW) test --test-dir ./tests/e2e +# e2e-native-sidecar +# NOTE: make sure the k8s featuregate "SidecarContainers" is set to true. +# NOTE: make sure the operator featuregate "operator.sidecarcontainers.native" is enabled. +.PHONY: e2e-native-sidecar +e2e-native-sidecar: chainsaw + $(CHAINSAW) test --test-dir ./tests/e2e-native-sidecar + # end-to-end-test for testing automatic RBAC creation .PHONY: e2e-automatic-rbac e2e-automatic-rbac: chainsaw @@ -312,6 +336,23 @@ e2e-prometheuscr: chainsaw e2e-targetallocator: chainsaw $(CHAINSAW) test --test-dir ./tests/e2e-targetallocator +# Target allocator CR end-to-end tests +.PHONY: e2e-targetallocator-cr +e2e-targetallocator-cr: chainsaw + $(CHAINSAW) test --test-dir ./tests/e2e-targetallocator-cr + +.PHONY: add-certmanager-permissions +add-certmanager-permissions: + # Kustomize only allows patches in the folder where the kustomization is located + # This folder is ignored by .gitignore + cp -r tests/e2e-ta-collector-mtls/certmanager-permissions config/rbac/certmanager-permissions + cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path certmanager-permissions/certmanager.yaml + +# Target allocator collector mTLS end-to-end tests +.PHONY: e2e-ta-collector-mtls +e2e-ta-collector-mtls: chainsaw + $(CHAINSAW) test --test-dir ./tests/e2e-ta-collector-mtls + # end-to-end-test for Annotations/Labels Filters .PHONY: e2e-metadata-filters e2e-metadata-filters: chainsaw @@ -454,7 +495,7 @@ KUSTOMIZE_VERSION ?= v5.0.3 CONTROLLER_TOOLS_VERSION ?= v0.16.1 GOLANGCI_LINT_VERSION ?= v1.57.2 KIND_VERSION ?= v0.20.0 -CHAINSAW_VERSION ?= v0.2.5 +CHAINSAW_VERSION ?= v0.2.8 
.PHONY: install-tools install-tools: kustomize golangci-lint kind controller-gen envtest crdoc kind operator-sdk chainsaw @@ -474,12 +515,12 @@ kind: ## Download kind locally if necessary. .PHONY: controller-gen controller-gen: $(CONTROLLER_GEN) ## Download controller-gen locally if necessary. $(CONTROLLER_GEN): $(LOCALBIN) - @test -s $(LOCALBIN)/controller-gen && $(LOCALBIN)/controller-gen --version | grep -q $(CONTROLLER_TOOLS_VERSION) || GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-tools/cmd/controller-gen@$(CONTROLLER_TOOLS_VERSION) + $(call go-get-tool,$(CONTROLLER_GEN), sigs.k8s.io/controller-tools/cmd/controller-gen,$(CONTROLLER_TOOLS_VERSION)) .PHONY: envtest envtest: $(ENVTEST) ## Download envtest-setup locally if necessary. $(ENVTEST): $(LOCALBIN) - @test -s $(LOCALBIN)/setup-envtest || GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest + $(call go-get-tool,$(ENVTEST), sigs.k8s.io/controller-runtime/tools/setup-envtest,latest) CRDOC = $(shell pwd)/bin/crdoc .PHONY: crdoc diff --git a/README.md b/README.md index f3485d7fac..6244ab90cf 100644 --- a/README.md +++ b/README.md @@ -11,6 +11,7 @@ The operator manages: ## Documentation +- [Compatibility & Support docs](./docs/compatibility.md) - [API docs](./docs/api.md) - [Offical Telemetry Operator page](https://opentelemetry.io/docs/kubernetes/operator/) @@ -291,9 +292,12 @@ instrumentation.opentelemetry.io/inject-nodejs: "true" ``` Python: +Python auto-instrumentation also honors an annotation that permits it to run on images with a C library other than glibc. 
```bash instrumentation.opentelemetry.io/inject-python: "true" +instrumentation.opentelemetry.io/otel-python-platform: "glibc" # for Linux glibc based images, this is the default value and can be omitted +instrumentation.opentelemetry.io/otel-python-platform: "musl" # for Linux musl based images ``` .NET: @@ -608,7 +612,7 @@ spec: mode: statefulset targetAllocator: enabled: true - config: + config: receivers: prometheus: config: @@ -740,7 +744,7 @@ spec: ### Configure resource attributes with labels -You can also use common labels to set resource attributes. +You can also use common labels to set resource attributes. The following labels are supported: - `app.kubernetes.io/name` becomes `service.name` @@ -782,62 +786,14 @@ The priority for setting resource attributes is as follows (first found wins): 1. Resource attributes set via `OTEL_RESOURCE_ATTRIBUTES` and `OTEL_SERVICE_NAME` environment variables 2. Resource attributes set via annotations (with the `resource.opentelemetry.io/` prefix) -3. Resource attributes set via labels (e.g. `app.kubernetes.io/name`) +3. Resource attributes set via labels (e.g. `app.kubernetes.io/name`) if the `Instrumentation` CR has defaults.useLabelsForResourceAttributes=true (see above) 4. Resource attributes calculated from the pod's metadata (e.g. `k8s.pod.name`) 5. Resource attributes set via the `Instrumentation` CR (in the `spec.resource.resourceAttributes` section) -This priority is applied for each resource attribute separately, so it is possible to set some attributes via +This priority is applied for each resource attribute separately, so it is possible to set some attributes via annotations and others via labels. -## Compatibility matrix - -### OpenTelemetry Operator vs. OpenTelemetry Collector - -The OpenTelemetry Operator follows the same versioning as the operand (OpenTelemetry Collector) up to the minor part of the version. For example, the OpenTelemetry Operator v0.18.1 tracks OpenTelemetry Collector 0.18.0. 
The patch part of the version indicates the patch level of the operator itself, not that of OpenTelemetry Collector. Whenever a new patch version is released for OpenTelemetry Collector, we'll release a new patch version of the operator. - -By default, the OpenTelemetry Operator ensures consistent versioning between itself and the managed `OpenTelemetryCollector` resources. That is, if the OpenTelemetry Operator is based on version `0.40.0`, it will create resources with an underlying OpenTelemetry Collector at version `0.40.0`. - -When a custom `Spec.Image` is used with an `OpenTelemetryCollector` resource, the OpenTelemetry Operator will not manage this versioning and upgrading. In this scenario, it is best practice that the OpenTelemetry Operator version should match the underlying core version. Given a `OpenTelemetryCollector` resource with a `Spec.Image` configured to a custom image based on underlying OpenTelemetry Collector at version `0.40.0`, it is recommended that the OpenTelemetry Operator is kept at version `0.40.0`. - -### OpenTelemetry Operator vs. Kubernetes vs. Cert Manager vs Prometheus Operator - -We strive to be compatible with the widest range of Kubernetes versions as possible, but some changes to Kubernetes itself require us to break compatibility with older Kubernetes versions, be it because of code incompatibilities, or in the name of maintainability. Every released operator will support a specific range of Kubernetes versions, to be determined at the latest during the release. - -We use `cert-manager` for some features of this operator and the third column shows the versions of the `cert-manager` that are known to work with this operator's versions. - -The Target Allocator supports prometheus-operator CRDs like ServiceMonitor, and it does so by using packages imported from prometheus-operator itself. The table shows which version is shipped with a given operator version. 
-Generally speaking, these are backwards compatible, but specific features require the appropriate package versions. - -The OpenTelemetry Operator _might_ work on versions outside of the given range, but when opening new issues, please make sure to test your scenario on a supported version. - -| OpenTelemetry Operator | Kubernetes | Cert-Manager | Prometheus-Operator | -|------------------------|----------------| ------------ |---------------------| -| v0.109.0 | v1.23 to v1.31 | v1 | v0.76.0 | -| v0.108.0 | v1.23 to v1.31 | v1 | v0.76.0 | -| v0.107.0 | v1.23 to v1.30 | v1 | v0.75.0 | -| v0.106.0 | v1.23 to v1.30 | v1 | v0.75.0 | -| v0.105.0 | v1.23 to v1.30 | v1 | v0.74.0 | -| v0.104.0 | v1.23 to v1.30 | v1 | v0.74.0 | -| v0.103.0 | v1.23 to v1.30 | v1 | v0.74.0 | -| v0.102.0 | v1.23 to v1.30 | v1 | v0.71.2 | -| v0.101.0 | v1.23 to v1.30 | v1 | v0.71.2 | -| v0.100.0 | v1.23 to v1.29 | v1 | v0.71.2 | -| v0.99.0 | v1.23 to v1.29 | v1 | v0.71.2 | -| v0.98.0 | v1.23 to v1.29 | v1 | v0.71.2 | -| v0.97.0 | v1.23 to v1.29 | v1 | v0.71.2 | -| v0.96.0 | v1.23 to v1.29 | v1 | v0.71.2 | -| v0.95.0 | v1.23 to v1.29 | v1 | v0.71.2 | -| v0.94.0 | v1.23 to v1.29 | v1 | v0.71.0 | -| v0.93.0 | v1.23 to v1.29 | v1 | v0.71.0 | -| v0.92.0 | v1.23 to v1.29 | v1 | v0.71.0 | -| v0.91.0 | v1.23 to v1.29 | v1 | v0.70.0 | -| v0.90.0 | v1.23 to v1.28 | v1 | v0.69.1 | -| v0.89.0 | v1.23 to v1.28 | v1 | v0.69.1 | -| v0.88.0 | v1.23 to v1.28 | v1 | v0.68.0 | -| v0.87.0 | v1.23 to v1.28 | v1 | v0.68.0 | -| v0.86.0 | v1.23 to v1.28 | v1 | v0.68.0 | - ## Contributing and Developing Please see [CONTRIBUTING.md](CONTRIBUTING.md). 
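The resource-attribute precedence that the README changes above describe (environment variable, then annotation, then label, then pod metadata, then the `Instrumentation` CR) is a first-found-wins lookup applied per attribute. A minimal dependency-free sketch of that resolution order, using hypothetical stand-in values rather than real operator APIs:

```go
package main

import "fmt"

// firstNonEmpty returns the first non-empty value, mirroring the
// "first found wins" precedence described for resource attributes.
// All source values below are hypothetical stand-ins, not operator APIs.
func firstNonEmpty(values ...string) string {
	for _, v := range values {
		if v != "" {
			return v
		}
	}
	return ""
}

func main() {
	fromEnv := ""                      // OTEL_RESOURCE_ATTRIBUTES / OTEL_SERVICE_NAME
	fromAnnotation := "checkout"       // resource.opentelemetry.io/service.name
	fromLabel := "checkout-deployment" // app.kubernetes.io/name (only if useLabelsForResourceAttributes)
	fromPodMeta := "checkout-7d9f"     // derived from pod metadata
	fromCR := "default-service"        // spec.resource.resourceAttributes

	name := firstNonEmpty(fromEnv, fromAnnotation, fromLabel, fromPodMeta, fromCR)
	fmt.Println(name) // prints "checkout": the annotation wins because no env var is set
}
```

Because the lookup runs once per attribute, a pod can take `service.name` from an annotation while taking `service.version` from a label.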
@@ -849,6 +805,7 @@ Approvers ([@open-telemetry/operator-approvers](https://github.com/orgs/open-tel - [Benedikt Bongartz](https://github.com/frzifus), Red Hat - [Tyler Helmuth](https://github.com/TylerHelmuth), Honeycomb - [Yuri Oliveira Sa](https://github.com/yuriolisa), Red Hat +- [Israel Blancas](https://github.com/iblancasa), Red Hat Emeritus Approvers: @@ -859,15 +816,6 @@ Emeritus Approvers: - [Owais Lone](https://github.com/owais), Splunk - [Pablo Baeyens](https://github.com/mx-psi), DataDog -Target Allocator Maintainers ([@open-telemetry/operator-ta-maintainers](https://github.com/orgs/open-telemetry/teams/operator-ta-maintainers)): - -- [Kristina Pathak](https://github.com/kristinapathak), Lightstep -- [Sebastian Poxhofer](https://github.com/secustor) - -Emeritus Target Allocator Maintainers - -- [Anthony Mirabella](https://github.com/Aneurysm9), AWS - Maintainers ([@open-telemetry/operator-maintainers](https://github.com/orgs/open-telemetry/teams/operator-maintainers)): - [Jacob Aronoff](https://github.com/jaronoff97), Lightstep diff --git a/RELEASE.md b/RELEASE.md index 1fecc1e997..c0f6e29e0e 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -12,7 +12,7 @@ Steps to release a new version of the OpenTelemetry Operator: > DO NOT BUMP JAVA PAST `1.X.X` AND DO NOT BUMP .NET PAST `1.2.0`. Upgrades past these versions will introduce breaking HTTP semantic convention changes. 1. Check if the compatible OpenShift versions are updated in the `Makefile`. 1. Update the bundle by running `make bundle VERSION=$VERSION`. - 1. Change the compatibility matrix in the [readme](./README.md) file, using the OpenTelemetry Operator version to be released and the current latest Kubernetes version as the latest supported version. Remove the oldest entry. + 1. Change the compatibility matrix in the [compatibility doc](./docs/compatibility.md) file, using the OpenTelemetry Operator version to be released and the current latest Kubernetes version as the latest supported version. 
Remove the oldest entry. 1. Update release schedule table, by moving the current release manager to the end of the table with updated release version. 1. Add the changes to the changelog by running `make chlog-update VERSION=$VERSION`. 1. Check the OpenTelemetry Collector's changelog and ensure migration steps are present in `pkg/collector/upgrade` @@ -44,9 +44,10 @@ The operator should be released within a week after the [OpenTelemetry collector | Version | Release manager | |----------|-----------------| -| v0.110.0 | @swiatekm | -| v0.111.0 | @frzifus | -| v0.112.0 | @yuriolisa | -| v0.113.0 | @pavolloffay | -| v0.114.0 | @TylerHelmuth | -| v0.115.0 | @jaronoff97 | \ No newline at end of file +| v0.115.0 | @TylerHelmuth | +| v0.116.0 | @jaronoff97 | +| v0.117.0 | @iblancasa | +| v0.118.0 | @frzifus | +| v0.119.0 | @yuriolisa | +| v0.120.0 | @pavolloffay | +| v0.121.0 | @swiatekm | diff --git a/apis/v1alpha1/instrumentation_types.go b/apis/v1alpha1/instrumentation_types.go index 2cccef7d6b..e290f4033b 100644 --- a/apis/v1alpha1/instrumentation_types.go +++ b/apis/v1alpha1/instrumentation_types.go @@ -97,8 +97,37 @@ type Resource struct { // Exporter defines OTLP exporter configuration. type Exporter struct { // Endpoint is address of the collector with OTLP endpoint. + // If the endpoint defines the https:// scheme, TLS has to be specified. // +optional Endpoint string `json:"endpoint,omitempty"` + + // TLS defines certificates for TLS. + // TLS needs to be enabled by specifying the https:// scheme in the Endpoint. + TLS *TLS `json:"tls,omitempty"` +} + +// TLS defines TLS configuration for exporter. +type TLS struct { + // SecretName defines the secret name that will be used to configure TLS on the exporter. + // It is the user's responsibility to create the secret in the namespace of the workload. + // The secret must contain client certificate (Cert) and private key (Key). + // The CA certificate might be defined in the secret or in the config map. 
+ SecretName string `json:"secretName,omitempty"` + + // ConfigMapName defines the configmap name with the CA certificate. If it is not defined, the CA certificate will be + used from the secret defined in SecretName. + ConfigMapName string `json:"configMapName,omitempty"` + + // CA defines the key of the certificate (e.g. ca.crt) in the configmap, secret or absolute path to a certificate. + // The absolute path can be used when the certificate is already present on the workload filesystem e.g. + // /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt + CA string `json:"ca_file,omitempty"` + // Cert defines the key (e.g. tls.crt) of the client certificate in the secret or absolute path to a certificate. + // The absolute path can be used when the certificate is already present on the workload filesystem. + Cert string `json:"cert_file,omitempty"` + // Key defines a key (e.g. tls.key) of the private key in the secret or absolute path to a key. + // The absolute path can be used when the key is already present on the workload filesystem. + Key string `json:"key_file,omitempty"` } // Sampler defines sampling configuration. @@ -133,6 +162,10 @@ type Java struct { // +optional Image string `json:"image,omitempty"` + // VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. + // If omitted, an emptyDir is used with size limit VolumeSizeLimit + VolumeClaimTemplate corev1.PersistentVolumeClaimTemplate `json:"volumeClaimTemplate,omitempty"` + + // VolumeSizeLimit defines size limit for volume used for auto-instrumentation. // The default size is 200Mi. VolumeSizeLimit *resource.Quantity `json:"volumeLimitSize,omitempty"` @@ -167,6 +200,10 @@ type NodeJS struct { // +optional Image string `json:"image,omitempty"` + // VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. 
+ // If omitted, an emptyDir is used with size limit VolumeSizeLimit + VolumeClaimTemplate corev1.PersistentVolumeClaimTemplate `json:"volumeClaimTemplate,omitempty"` + + // VolumeSizeLimit defines size limit for volume used for auto-instrumentation. // The default size is 200Mi. VolumeSizeLimit *resource.Quantity `json:"volumeLimitSize,omitempty"` @@ -188,6 +225,10 @@ type Python struct { // +optional Image string `json:"image,omitempty"` + // VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. + // If omitted, an emptyDir is used with size limit VolumeSizeLimit + VolumeClaimTemplate corev1.PersistentVolumeClaimTemplate `json:"volumeClaimTemplate,omitempty"` + + // VolumeSizeLimit defines size limit for volume used for auto-instrumentation. // The default size is 200Mi. VolumeSizeLimit *resource.Quantity `json:"volumeLimitSize,omitempty"` @@ -209,6 +250,10 @@ type DotNet struct { // +optional Image string `json:"image,omitempty"` + // VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. + // If omitted, an emptyDir is used with size limit VolumeSizeLimit + VolumeClaimTemplate corev1.PersistentVolumeClaimTemplate `json:"volumeClaimTemplate,omitempty"` + + // VolumeSizeLimit defines size limit for volume used for auto-instrumentation. // The default size is 200Mi. VolumeSizeLimit *resource.Quantity `json:"volumeLimitSize,omitempty"` @@ -228,6 +273,10 @@ type Go struct { // +optional Image string `json:"image,omitempty"` + // VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. + // If omitted, an emptyDir is used with size limit VolumeSizeLimit + VolumeClaimTemplate corev1.PersistentVolumeClaimTemplate `json:"volumeClaimTemplate,omitempty"` + + // VolumeSizeLimit defines size limit for volume used for auto-instrumentation. // The default size is 200Mi. 
VolumeSizeLimit *resource.Quantity `json:"volumeLimitSize,omitempty"` @@ -249,6 +298,10 @@ type ApacheHttpd struct { // +optional Image string `json:"image,omitempty"` + // VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. + // If omitted, an emptyDir is used with size limit VolumeSizeLimit + VolumeClaimTemplate corev1.PersistentVolumeClaimTemplate `json:"volumeClaimTemplate,omitempty"` + + // VolumeSizeLimit defines size limit for volume used for auto-instrumentation. // The default size is 200Mi. VolumeSizeLimit *resource.Quantity `json:"volumeLimitSize,omitempty"` @@ -285,6 +338,10 @@ type Nginx struct { // +optional Image string `json:"image,omitempty"` + // VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. + // If omitted, an emptyDir is used with size limit VolumeSizeLimit + VolumeClaimTemplate corev1.PersistentVolumeClaimTemplate `json:"volumeClaimTemplate,omitempty"` + + // VolumeSizeLimit defines size limit for volume used for auto-instrumentation. // The default size is 200Mi. 
VolumeSizeLimit *resource.Quantity `json:"volumeLimitSize,omitempty"` diff --git a/apis/v1alpha1/instrumentation_webhook.go b/apis/v1alpha1/instrumentation_webhook.go index 004992f795..b4aae51c56 100644 --- a/apis/v1alpha1/instrumentation_webhook.go +++ b/apis/v1alpha1/instrumentation_webhook.go @@ -17,6 +17,7 @@ package v1alpha1 import ( "context" "fmt" + "reflect" "strconv" "strings" @@ -127,13 +128,13 @@ func (w InstrumentationWebhook) defaulter(r *Instrumentation) error { if r.Spec.Python.Resources.Limits == nil { r.Spec.Python.Resources.Limits = corev1.ResourceList{ corev1.ResourceCPU: resource.MustParse("500m"), - corev1.ResourceMemory: resource.MustParse("32Mi"), + corev1.ResourceMemory: resource.MustParse("64Mi"), } } if r.Spec.Python.Resources.Requests == nil { r.Spec.Python.Resources.Requests = corev1.ResourceList{ corev1.ResourceCPU: resource.MustParse("50m"), - corev1.ResourceMemory: resource.MustParse("32Mi"), + corev1.ResourceMemory: resource.MustParse("64Mi"), } } if r.Spec.DotNet.Image == "" { @@ -157,13 +158,13 @@ func (w InstrumentationWebhook) defaulter(r *Instrumentation) error { if r.Spec.Go.Resources.Limits == nil { r.Spec.Go.Resources.Limits = corev1.ResourceList{ corev1.ResourceCPU: resource.MustParse("500m"), - corev1.ResourceMemory: resource.MustParse("32Mi"), + corev1.ResourceMemory: resource.MustParse("64Mi"), } } if r.Spec.Go.Resources.Requests == nil { r.Spec.Go.Resources.Requests = corev1.ResourceList{ corev1.ResourceCPU: resource.MustParse("50m"), - corev1.ResourceMemory: resource.MustParse("32Mi"), + corev1.ResourceMemory: resource.MustParse("64Mi"), } } if r.Spec.ApacheHttpd.Image == "" { @@ -236,9 +237,61 @@ func (w InstrumentationWebhook) validate(r *Instrumentation) (admission.Warnings default: return warnings, fmt.Errorf("spec.sampler.type is not valid: %s", r.Spec.Sampler.Type) } + + var err error + err = validateInstrVolume(r.Spec.ApacheHttpd.VolumeClaimTemplate, r.Spec.ApacheHttpd.VolumeSizeLimit) + if err != nil { + return 
warnings, fmt.Errorf("spec.apachehttpd.volumeClaimTemplate and spec.apachehttpd.volumeSizeLimit cannot both be defined: %w", err) + } + err = validateInstrVolume(r.Spec.DotNet.VolumeClaimTemplate, r.Spec.DotNet.VolumeSizeLimit) + if err != nil { + return warnings, fmt.Errorf("spec.dotnet.volumeClaimTemplate and spec.dotnet.volumeSizeLimit cannot both be defined: %w", err) + } + err = validateInstrVolume(r.Spec.Go.VolumeClaimTemplate, r.Spec.Go.VolumeSizeLimit) + if err != nil { + return warnings, fmt.Errorf("spec.go.volumeClaimTemplate and spec.go.volumeSizeLimit cannot both be defined: %w", err) + } + err = validateInstrVolume(r.Spec.Java.VolumeClaimTemplate, r.Spec.Java.VolumeSizeLimit) + if err != nil { + return warnings, fmt.Errorf("spec.java.volumeClaimTemplate and spec.java.volumeSizeLimit cannot both be defined: %w", err) + } + err = validateInstrVolume(r.Spec.Nginx.VolumeClaimTemplate, r.Spec.Nginx.VolumeSizeLimit) + if err != nil { + return warnings, fmt.Errorf("spec.nginx.volumeClaimTemplate and spec.nginx.volumeSizeLimit cannot both be defined: %w", err) + } + err = validateInstrVolume(r.Spec.NodeJS.VolumeClaimTemplate, r.Spec.NodeJS.VolumeSizeLimit) + if err != nil { + return warnings, fmt.Errorf("spec.nodejs.volumeClaimTemplate and spec.nodejs.volumeSizeLimit cannot both be defined: %w", err) + } + err = validateInstrVolume(r.Spec.Python.VolumeClaimTemplate, r.Spec.Python.VolumeSizeLimit) + if err != nil { + return warnings, fmt.Errorf("spec.python.volumeClaimTemplate and spec.python.volumeSizeLimit cannot both be defined: %w", err) + } + + warnings = append(warnings, validateExporter(r.Spec.Exporter)...) 
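The per-language calls above all funnel into a single helper that rejects setting both `volumeClaimTemplate` and `volumeSizeLimit`, using a reflect zero-value check. A standalone sketch of that check, with a stand-in struct instead of `corev1.PersistentVolumeClaimTemplate` so it runs without Kubernetes dependencies:

```go
package main

import (
	"fmt"
	"reflect"
)

// claimTemplate is a stand-in for corev1.PersistentVolumeClaimTemplate.
type claimTemplate struct {
	AccessModes []string
}

// validateInstrVolume mirrors the webhook's rule: a non-zero claim template
// and a non-nil size limit are mutually exclusive.
func validateInstrVolume(tmpl claimTemplate, sizeLimit *int64) error {
	if !reflect.ValueOf(tmpl).IsZero() && sizeLimit != nil {
		return fmt.Errorf("unable to resolve volume size")
	}
	return nil
}

func main() {
	limit := int64(200)
	err := validateInstrVolume(claimTemplate{AccessModes: []string{"ReadWriteOnce"}}, &limit)
	fmt.Println(err != nil) // prints "true": setting both fields is rejected
}
```

`reflect.Value.IsZero` treats a struct as zero only when every field is its zero value, which is why an empty template plus a size limit still passes.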
+ return warnings, nil } +func validateExporter(exporter Exporter) []string { + var warnings []string + if exporter.TLS != nil { + tls := exporter.TLS + if tls.Key != "" && tls.Cert == "" || tls.Cert != "" && tls.Key == "" { + warnings = append(warnings, "both exporter.tls.key and exporter.tls.cert must be set") + } + + if !strings.HasPrefix(exporter.Endpoint, "https://") { + warnings = append(warnings, "exporter.tls is configured but exporter.endpoint is not enabling TLS with https://") + } + } + if strings.HasPrefix(exporter.Endpoint, "https://") && exporter.TLS == nil { + warnings = append(warnings, "exporter is using https:// but exporter.tls is unset") + } + + return warnings +} + func validateJaegerRemoteSamplerArgument(argument string) error { parts := strings.Split(argument, ",") @@ -270,6 +323,13 @@ func validateJaegerRemoteSamplerArgument(argument string) error { return nil } +func validateInstrVolume(volumeClaimTemplate corev1.PersistentVolumeClaimTemplate, volumeSizeLimit *resource.Quantity) error { + if !reflect.ValueOf(volumeClaimTemplate).IsZero() && volumeSizeLimit != nil { + return fmt.Errorf("unable to resolve volume size") + } + return nil +} + func NewInstrumentationWebhook(logger logr.Logger, scheme *runtime.Scheme, cfg config.Config) *InstrumentationWebhook { return &InstrumentationWebhook{ logger: logger, diff --git a/apis/v1alpha1/instrumentation_webhook_test.go b/apis/v1alpha1/instrumentation_webhook_test.go index 81049cbc0c..f1089215aa 100644 --- a/apis/v1alpha1/instrumentation_webhook_test.go +++ b/apis/v1alpha1/instrumentation_webhook_test.go @@ -19,11 +19,15 @@ import ( "testing" "github.com/stretchr/testify/assert" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" "sigs.k8s.io/controller-runtime/pkg/webhook/admission" "github.com/open-telemetry/opentelemetry-operator/internal/config" ) +var defaultVolumeSize = resource.MustParse("200Mi") + func TestInstrumentationDefaultingWebhook(t *testing.T) { inst := 
&Instrumentation{} err := InstrumentationWebhook{ @@ -113,6 +117,111 @@ func TestInstrumentationValidatingWebhook(t *testing.T) { }, }, }, + { + name: "with volume and volumeSizeLimit", + err: "spec.nodejs.volumeClaimTemplate and spec.nodejs.volumeSizeLimit cannot both be defined", + inst: Instrumentation{ + Spec: InstrumentationSpec{ + NodeJS: NodeJS{ + VolumeClaimTemplate: corev1.PersistentVolumeClaimTemplate{ + Spec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce}, + }, + }, + VolumeSizeLimit: &defaultVolumeSize, + }, + }, + }, + warnings: []string{"sampler type not set"}, + }, + { + name: "exporter: tls cert set but missing key", + inst: Instrumentation{ + Spec: InstrumentationSpec{ + Sampler: Sampler{ + Type: ParentBasedTraceIDRatio, + Argument: "0.99", + }, + Exporter: Exporter{ + Endpoint: "https://collector:4317", + TLS: &TLS{ + Cert: "cert", + }, + }, + }, + }, + warnings: []string{"both exporter.tls.key and exporter.tls.cert must be set"}, + }, + { + name: "exporter: tls key set but missing cert", + inst: Instrumentation{ + Spec: InstrumentationSpec{ + Sampler: Sampler{ + Type: ParentBasedTraceIDRatio, + Argument: "0.99", + }, + Exporter: Exporter{ + Endpoint: "https://collector:4317", + TLS: &TLS{ + Key: "key", + }, + }, + }, + }, + warnings: []string{"both exporter.tls.key and exporter.tls.cert must be set"}, + }, + { + name: "exporter: tls set but using http://", + inst: Instrumentation{ + Spec: InstrumentationSpec{ + Sampler: Sampler{ + Type: ParentBasedTraceIDRatio, + Argument: "0.99", + }, + Exporter: Exporter{ + Endpoint: "http://collector:4317", + TLS: &TLS{ + Key: "key", + Cert: "cert", + }, + }, + }, + }, + warnings: []string{"exporter.tls is configured but exporter.endpoint is not enabling TLS with https://"}, + }, + { + name: "exporter: exporter using https://, but the tls is nil", + inst: Instrumentation{ + Spec: InstrumentationSpec{ + Sampler: Sampler{ + Type: ParentBasedTraceIDRatio, + 
Argument: "0.99", + }, + Exporter: Exporter{ + Endpoint: "https://collector:4317", + }, + }, + }, + warnings: []string{"exporter is using https:// but exporter.tls is unset"}, + }, + { + name: "exporter no warning set", + inst: Instrumentation{ + Spec: InstrumentationSpec{ + Sampler: Sampler{ + Type: ParentBasedTraceIDRatio, + Argument: "0.99", + }, + Exporter: Exporter{ + Endpoint: "https://collector:4317", + TLS: &TLS{ + Key: "key", + Cert: "cert", + }, + }, + }, + }, + }, } for _, test := range tests { diff --git a/apis/v1alpha1/targetallocator_webhook.go b/apis/v1alpha1/targetallocator_webhook.go index bed76f29a4..1a3687dd65 100644 --- a/apis/v1alpha1/targetallocator_webhook.go +++ b/apis/v1alpha1/targetallocator_webhook.go @@ -26,6 +26,7 @@ import ( "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" "github.com/open-telemetry/opentelemetry-operator/internal/config" + "github.com/open-telemetry/opentelemetry-operator/internal/naming" "github.com/open-telemetry/opentelemetry-operator/internal/rbac" ) @@ -119,7 +120,11 @@ func (w TargetAllocatorWebhook) validate(ctx context.Context, ta *TargetAllocato // if the prometheusCR is enabled, it needs a suite of permissions to function if ta.Spec.PrometheusCR.Enabled { - warnings, err := v1beta1.CheckTargetAllocatorPrometheusCRPolicyRules(ctx, w.reviewer, ta.Spec.ServiceAccount, ta.GetNamespace()) + saname := ta.Spec.ServiceAccount + if len(ta.Spec.ServiceAccount) == 0 { + saname = naming.TargetAllocatorServiceAccount(ta.Name) + } + warnings, err := v1beta1.CheckTargetAllocatorPrometheusCRPolicyRules(ctx, w.reviewer, ta.GetNamespace(), saname) if err != nil || len(warnings) > 0 { return warnings, err } diff --git a/apis/v1alpha1/targetallocator_webhook_test.go b/apis/v1alpha1/targetallocator_webhook_test.go index aedbb62c82..5e665368a2 100644 --- a/apis/v1alpha1/targetallocator_webhook_test.go +++ b/apis/v1alpha1/targetallocator_webhook_test.go @@ -224,6 +224,10 @@ func 
TestTargetAllocatorValidatingWebhook(t *testing.T) { name: "prom CR admissions warning", shouldFailSar: true, // force failure targetallocator: TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-ta", + Namespace: "test-ns", + }, Spec: TargetAllocatorSpec{ PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{ Enabled: true, @@ -231,18 +235,18 @@ func TestTargetAllocatorValidatingWebhook(t *testing.T) { }, }, expectedWarnings: []string{ - "missing the following rules for monitoring.coreos.com/servicemonitors: [*]", - "missing the following rules for monitoring.coreos.com/podmonitors: [*]", - "missing the following rules for nodes/metrics: [get,list,watch]", - "missing the following rules for services: [get,list,watch]", - "missing the following rules for endpoints: [get,list,watch]", - "missing the following rules for namespaces: [get,list,watch]", - "missing the following rules for networking.k8s.io/ingresses: [get,list,watch]", - "missing the following rules for nodes: [get,list,watch]", - "missing the following rules for pods: [get,list,watch]", - "missing the following rules for configmaps: [get]", - "missing the following rules for discovery.k8s.io/endpointslices: [get,list,watch]", - "missing the following rules for nonResourceURL: /metrics: [get]", + "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - monitoring.coreos.com/servicemonitors: [*]", + "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - monitoring.coreos.com/podmonitors: [*]", + "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - nodes/metrics: [get,list,watch]", + "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - services: [get,list,watch]", + "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - endpoints: [get,list,watch]", + "missing the following rules for 
system:serviceaccount:test-ns:test-ta-targetallocator - namespaces: [get,list,watch]", + "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - networking.k8s.io/ingresses: [get,list,watch]", + "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - nodes: [get,list,watch]", + "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - pods: [get,list,watch]", + "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - configmaps: [get]", + "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - discovery.k8s.io/endpointslices: [get,list,watch]", + "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - nonResourceURL: /metrics: [get]", }, }, { diff --git a/apis/v1alpha1/zz_generated.deepcopy.go b/apis/v1alpha1/zz_generated.deepcopy.go index 270c617e17..35c04992cb 100644 --- a/apis/v1alpha1/zz_generated.deepcopy.go +++ b/apis/v1alpha1/zz_generated.deepcopy.go @@ -31,6 +31,7 @@ import ( // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *ApacheHttpd) DeepCopyInto(out *ApacheHttpd) { *out = *in + in.VolumeClaimTemplate.DeepCopyInto(&out.VolumeClaimTemplate) if in.VolumeSizeLimit != nil { in, out := &in.VolumeSizeLimit, &out.VolumeSizeLimit x := (*in).DeepCopy() @@ -143,6 +144,7 @@ func (in *Defaults) DeepCopy() *Defaults { // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
func (in *DotNet) DeepCopyInto(out *DotNet) { *out = *in + in.VolumeClaimTemplate.DeepCopyInto(&out.VolumeClaimTemplate) if in.VolumeSizeLimit != nil { in, out := &in.VolumeSizeLimit, &out.VolumeSizeLimit x := (*in).DeepCopy() @@ -171,6 +173,11 @@ func (in *DotNet) DeepCopy() *DotNet { // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *Exporter) DeepCopyInto(out *Exporter) { *out = *in + if in.TLS != nil { + in, out := &in.TLS, &out.TLS + *out = new(TLS) + **out = **in + } } // DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Exporter. @@ -201,6 +208,7 @@ func (in *Extensions) DeepCopy() *Extensions { // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *Go) DeepCopyInto(out *Go) { *out = *in + in.VolumeClaimTemplate.DeepCopyInto(&out.VolumeClaimTemplate) if in.VolumeSizeLimit != nil { in, out := &in.VolumeSizeLimit, &out.VolumeSizeLimit x := (*in).DeepCopy() @@ -323,7 +331,7 @@ func (in *InstrumentationList) DeepCopyObject() runtime.Object { // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *InstrumentationSpec) DeepCopyInto(out *InstrumentationSpec) { *out = *in - out.Exporter = in.Exporter + in.Exporter.DeepCopyInto(&out.Exporter) in.Resource.DeepCopyInto(&out.Resource) if in.Propagators != nil { in, out := &in.Propagators, &out.Propagators @@ -376,6 +384,7 @@ func (in *InstrumentationStatus) DeepCopy() *InstrumentationStatus { // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
func (in *Java) DeepCopyInto(out *Java) { *out = *in + in.VolumeClaimTemplate.DeepCopyInto(&out.VolumeClaimTemplate) if in.VolumeSizeLimit != nil { in, out := &in.VolumeSizeLimit, &out.VolumeSizeLimit x := (*in).DeepCopy() @@ -444,6 +453,7 @@ func (in *MetricsConfigSpec) DeepCopy() *MetricsConfigSpec { // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *Nginx) DeepCopyInto(out *Nginx) { *out = *in + in.VolumeClaimTemplate.DeepCopyInto(&out.VolumeClaimTemplate) if in.VolumeSizeLimit != nil { in, out := &in.VolumeSizeLimit, &out.VolumeSizeLimit x := (*in).DeepCopy() @@ -479,6 +489,7 @@ func (in *Nginx) DeepCopy() *Nginx { // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *NodeJS) DeepCopyInto(out *NodeJS) { *out = *in + in.VolumeClaimTemplate.DeepCopyInto(&out.VolumeClaimTemplate) if in.VolumeSizeLimit != nil { in, out := &in.VolumeSizeLimit, &out.VolumeSizeLimit x := (*in).DeepCopy() @@ -1195,6 +1206,7 @@ func (in *Probe) DeepCopy() *Probe { // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *Python) DeepCopyInto(out *Python) { *out = *in + in.VolumeClaimTemplate.DeepCopyInto(&out.VolumeClaimTemplate) if in.VolumeSizeLimit != nil { in, out := &in.VolumeSizeLimit, &out.VolumeSizeLimit x := (*in).DeepCopy() @@ -1272,6 +1284,21 @@ func (in *ScaleSubresourceStatus) DeepCopy() *ScaleSubresourceStatus { return out } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *TLS) DeepCopyInto(out *TLS) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TLS. 
+func (in *TLS) DeepCopy() *TLS { + if in == nil { + return nil + } + out := new(TLS) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *TargetAllocator) DeepCopyInto(out *TargetAllocator) { *out = *in diff --git a/apis/v1beta1/collector_webhook.go b/apis/v1beta1/collector_webhook.go index e79754b4bd..d6ad88dcff 100644 --- a/apis/v1beta1/collector_webhook.go +++ b/apis/v1beta1/collector_webhook.go @@ -29,6 +29,7 @@ import ( "github.com/open-telemetry/opentelemetry-operator/internal/config" "github.com/open-telemetry/opentelemetry-operator/internal/fips" ta "github.com/open-telemetry/opentelemetry-operator/internal/manifests/targetallocator/adapters" + "github.com/open-telemetry/opentelemetry-operator/internal/naming" "github.com/open-telemetry/opentelemetry-operator/internal/rbac" "github.com/open-telemetry/opentelemetry-operator/pkg/featuregate" ) @@ -121,7 +122,7 @@ func (c CollectorWebhook) ValidateCreate(ctx context.Context, obj runtime.Object c.metrics.create(ctx, otelcol) } if c.bv != nil { - newWarnings := c.bv(*otelcol) + newWarnings := c.bv(ctx, *otelcol) warnings = append(warnings, newWarnings...) } return warnings, nil @@ -151,7 +152,7 @@ func (c CollectorWebhook) ValidateUpdate(ctx context.Context, oldObj, newObj run } if c.bv != nil { - newWarnings := c.bv(*otelcol) + newWarnings := c.bv(ctx, *otelcol) warnings = append(warnings, newWarnings...) 
} return warnings, nil @@ -188,6 +189,11 @@ func (c CollectorWebhook) Validate(ctx context.Context, r *OpenTelemetryCollecto return warnings, fmt.Errorf("the OpenTelemetry Collector mode is set to %s, which does not support the attribute 'volumeClaimTemplates'", r.Spec.Mode) } + // validate persistentVolumeClaimRetentionPolicy + if r.Spec.Mode != ModeStatefulSet && r.Spec.PersistentVolumeClaimRetentionPolicy != nil { + return warnings, fmt.Errorf("the OpenTelemetry Collector mode is set to %s, which does not support the attribute 'persistentVolumeClaimRetentionPolicy'", r.Spec.Mode) + } + // validate tolerations if r.Spec.Mode == ModeSidecar && len(r.Spec.Tolerations) > 0 { return warnings, fmt.Errorf("the OpenTelemetry Collector mode is set to %s, which does not support the attribute 'tolerations'", r.Spec.Mode) @@ -336,8 +342,12 @@ func (c CollectorWebhook) validateTargetAllocatorConfig(ctx context.Context, r * } // if the prometheusCR is enabled, it needs a suite of permissions to function if r.Spec.TargetAllocator.PrometheusCR.Enabled { + saname := r.Spec.TargetAllocator.ServiceAccount + if len(r.Spec.TargetAllocator.ServiceAccount) == 0 { + saname = naming.TargetAllocatorServiceAccount(r.Name) + } warnings, err := CheckTargetAllocatorPrometheusCRPolicyRules( - ctx, c.reviewer, r.Spec.TargetAllocator.ServiceAccount, r.GetNamespace()) + ctx, c.reviewer, r.GetNamespace(), saname) if err != nil || len(warnings) > 0 { return warnings, err } @@ -385,13 +395,13 @@ func ValidatePorts(ports []PortsSpec) error { func checkAutoscalerSpec(autoscaler *AutoscalerSpec) error { if autoscaler.Behavior != nil { if autoscaler.Behavior.ScaleDown != nil && autoscaler.Behavior.ScaleDown.StabilizationWindowSeconds != nil && - *autoscaler.Behavior.ScaleDown.StabilizationWindowSeconds < int32(1) { - return fmt.Errorf("the OpenTelemetry Spec autoscale configuration is incorrect, scaleDown should be one or more") + (*autoscaler.Behavior.ScaleDown.StabilizationWindowSeconds < int32(0) || 
*autoscaler.Behavior.ScaleDown.StabilizationWindowSeconds > 3600) { + return fmt.Errorf("the OpenTelemetry Spec autoscale configuration is incorrect, scaleDown.stabilizationWindowSeconds should be >=0 and <=3600") } if autoscaler.Behavior.ScaleUp != nil && autoscaler.Behavior.ScaleUp.StabilizationWindowSeconds != nil && - *autoscaler.Behavior.ScaleUp.StabilizationWindowSeconds < int32(1) { - return fmt.Errorf("the OpenTelemetry Spec autoscale configuration is incorrect, scaleUp should be one or more") + (*autoscaler.Behavior.ScaleUp.StabilizationWindowSeconds < int32(0) || *autoscaler.Behavior.ScaleUp.StabilizationWindowSeconds > 3600) { + return fmt.Errorf("the OpenTelemetry Spec autoscale configuration is incorrect, scaleUp.stabilizationWindowSeconds should be >=0 and <=3600") } } if autoscaler.TargetCPUUtilization != nil && *autoscaler.TargetCPUUtilization < int32(1) { @@ -425,7 +435,7 @@ func checkAutoscalerSpec(autoscaler *AutoscalerSpec) error { // BuildValidator enables running the manifest generators for the collector reconciler // +kubebuilder:object:generate=false -type BuildValidator func(c OpenTelemetryCollector) admission.Warnings +type BuildValidator func(ctx context.Context, c OpenTelemetryCollector) admission.Warnings func NewCollectorWebhook( logger logr.Logger, diff --git a/apis/v1beta1/collector_webhook_test.go b/apis/v1beta1/collector_webhook_test.go index 0b6b915486..8604b91b3e 100644 --- a/apis/v1beta1/collector_webhook_test.go +++ b/apis/v1beta1/collector_webhook_test.go @@ -17,6 +17,7 @@ package v1beta1_test import ( "context" "fmt" + "math" "os" "testing" @@ -82,7 +83,7 @@ func TestValidate(t *testing.T) { }, } - bv := func(collector v1beta1.OpenTelemetryCollector) admission.Warnings { + bv := func(_ context.Context, collector v1beta1.OpenTelemetryCollector) admission.Warnings { var warnings admission.Warnings cfg := config.New( config.WithCollectorImage("default-collector"), @@ -168,7 +169,7 @@ func TestCollectorDefaultingWebhook(t 
*testing.T) { Mode: v1beta1.ModeDeployment, UpgradeStrategy: v1beta1.UpgradeStrategyAutomatic, Config: func() v1beta1.Config { - const input = `{"receivers":{"otlp":{"protocols":{"grpc":{"endpoint":"0.0.0.0:4317"},"http":{"endpoint":"0.0.0.0:4318"}}}},"exporters":{"debug":null},"service":{"pipelines":{"traces":{"receivers":["otlp"],"exporters":["debug"]}}}}` + const input = `{"receivers":{"otlp":{"protocols":{"grpc":{"endpoint":"0.0.0.0:4317"},"http":{"endpoint":"0.0.0.0:4318"}}}},"exporters":{"debug":null},"service":{"telemetry":{"metrics":{"address":"0.0.0.0:8888"}},"pipelines":{"traces":{"receivers":["otlp"],"exporters":["debug"]}}}}` var cfg v1beta1.Config require.NoError(t, yaml.Unmarshal([]byte(input), &cfg)) return cfg @@ -181,7 +182,7 @@ func TestCollectorDefaultingWebhook(t *testing.T) { otelcol: v1beta1.OpenTelemetryCollector{ Spec: v1beta1.OpenTelemetryCollectorSpec{ Config: func() v1beta1.Config { - const input = `{"receivers":{"otlp":{"protocols":{"grpc":{"headers":{"example":"another"}},"http":{"endpoint":"0.0.0.0:4000"}}}},"exporters":{"debug":null},"service":{"pipelines":{"traces":{"receivers":["otlp"],"exporters":["debug"]}}}}` + const input = `{"receivers":{"otlp":{"protocols":{"grpc":{"headers":{"example":"another"}},"http":{"endpoint":"0.0.0.0:4000"}}}},"exporters":{"debug":null},"service":{"telemetry":{"metrics":{"address":"1.2.3.4:7654"}},"pipelines":{"traces":{"receivers":["otlp"],"exporters":["debug"]}}}}` var cfg v1beta1.Config require.NoError(t, yaml.Unmarshal([]byte(input), &cfg)) return cfg @@ -200,7 +201,7 @@ func TestCollectorDefaultingWebhook(t *testing.T) { Mode: v1beta1.ModeDeployment, UpgradeStrategy: v1beta1.UpgradeStrategyAutomatic, Config: func() v1beta1.Config { - const input = `{"receivers":{"otlp":{"protocols":{"grpc":{"endpoint":"0.0.0.0:4317","headers":{"example":"another"}},"http":{"endpoint":"0.0.0.0:4000"}}}},"exporters":{"debug":null},"service":{"pipelines":{"traces":{"receivers":["otlp"],"exporters":["debug"]}}}}` + 
const input = `{"receivers":{"otlp":{"protocols":{"grpc":{"endpoint":"0.0.0.0:4317","headers":{"example":"another"}},"http":{"endpoint":"0.0.0.0:4000"}}}},"exporters":{"debug":null},"service":{"telemetry":{"metrics":{"address":"1.2.3.4:7654"}},"pipelines":{"traces":{"receivers":["otlp"],"exporters":["debug"]}}}}` var cfg v1beta1.Config require.NoError(t, yaml.Unmarshal([]byte(input), &cfg)) return cfg @@ -517,7 +518,7 @@ func TestCollectorDefaultingWebhook(t *testing.T) { }, } - bv := func(collector v1beta1.OpenTelemetryCollector) admission.Warnings { + bv := func(_ context.Context, collector v1beta1.OpenTelemetryCollector) admission.Warnings { var warnings admission.Warnings cfg := config.New( config.WithCollectorImage("default-collector"), @@ -553,6 +554,9 @@ func TestCollectorDefaultingWebhook(t *testing.T) { ) ctx := context.Background() err := cvw.Default(ctx, &test.otelcol) + if test.expected.Spec.Config.Service.Telemetry == nil { + assert.NoError(t, test.expected.Spec.Config.Service.ApplyDefaults(), "could not apply defaults") + } assert.NoError(t, err) assert.Equal(t, test.expected, test.otelcol) }) @@ -582,6 +586,7 @@ func TestOTELColValidatingWebhook(t *testing.T) { one := int32(1) three := int32(3) five := int32(5) + maxInt := int32(math.MaxInt32) cfg := v1beta1.Config{} err := yaml.Unmarshal([]byte(cfgYaml), &cfg) @@ -646,6 +651,10 @@ func TestOTELColValidatingWebhook(t *testing.T) { name: "prom CR admissions warning", shouldFailSar: true, // force failure otelcol: v1beta1.OpenTelemetryCollector{ + ObjectMeta: metav1.ObjectMeta{ + Name: "adm-warning", + Namespace: "test-ns", + }, Spec: v1beta1.OpenTelemetryCollectorSpec{ Mode: v1beta1.ModeStatefulSet, OpenTelemetryCommonFields: v1beta1.OpenTelemetryCommonFields{ @@ -688,18 +697,18 @@ func TestOTELColValidatingWebhook(t *testing.T) { }, }, expectedWarnings: []string{ - "missing the following rules for monitoring.coreos.com/servicemonitors: [*]", - "missing the following rules for 
monitoring.coreos.com/podmonitors: [*]", - "missing the following rules for nodes/metrics: [get,list,watch]", - "missing the following rules for services: [get,list,watch]", - "missing the following rules for endpoints: [get,list,watch]", - "missing the following rules for namespaces: [get,list,watch]", - "missing the following rules for networking.k8s.io/ingresses: [get,list,watch]", - "missing the following rules for nodes: [get,list,watch]", - "missing the following rules for pods: [get,list,watch]", - "missing the following rules for configmaps: [get]", - "missing the following rules for discovery.k8s.io/endpointslices: [get,list,watch]", - "missing the following rules for nonResourceURL: /metrics: [get]", + "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - monitoring.coreos.com/servicemonitors: [*]", + "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - monitoring.coreos.com/podmonitors: [*]", + "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - nodes/metrics: [get,list,watch]", + "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - services: [get,list,watch]", + "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - endpoints: [get,list,watch]", + "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - namespaces: [get,list,watch]", + "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - networking.k8s.io/ingresses: [get,list,watch]", + "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - nodes: [get,list,watch]", + "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - pods: [get,list,watch]", + "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - configmaps: [get]", 
+ "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - discovery.k8s.io/endpointslices: [get,list,watch]", + "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - nonResourceURL: /metrics: [get]", }, }, { @@ -758,6 +767,21 @@ func TestOTELColValidatingWebhook(t *testing.T) { }, expectedErr: "does not support the attribute 'volumeClaimTemplates'", }, + { + name: "invalid mode with persistentVolumeClaimRetentionPolicy", + otelcol: v1beta1.OpenTelemetryCollector{ + Spec: v1beta1.OpenTelemetryCollectorSpec{ + Mode: v1beta1.ModeSidecar, + StatefulSetCommonFields: v1beta1.StatefulSetCommonFields{ + PersistentVolumeClaimRetentionPolicy: &appsv1.StatefulSetPersistentVolumeClaimRetentionPolicy{ + WhenDeleted: appsv1.RetainPersistentVolumeClaimRetentionPolicyType, + WhenScaled: appsv1.DeletePersistentVolumeClaimRetentionPolicyType, + }, + }, + }, + }, + expectedErr: "does not support the attribute 'persistentVolumeClaimRetentionPolicy'", + }, { name: "invalid mode with tolerations", otelcol: v1beta1.OpenTelemetryCollector{ @@ -913,36 +937,68 @@ func TestOTELColValidatingWebhook(t *testing.T) { expectedErr: "minReplicas should be one or more", }, { - name: "invalid autoscaler scale down", + name: "invalid autoscaler scale down stablization window - <0", otelcol: v1beta1.OpenTelemetryCollector{ Spec: v1beta1.OpenTelemetryCollectorSpec{ Autoscaler: &v1beta1.AutoscalerSpec{ MaxReplicas: &three, Behavior: &autoscalingv2.HorizontalPodAutoscalerBehavior{ ScaleDown: &autoscalingv2.HPAScalingRules{ - StabilizationWindowSeconds: &zero, + StabilizationWindowSeconds: &minusOne, + }, + }, + }, + }, + }, + expectedErr: "scaleDown.stabilizationWindowSeconds should be >=0 and <=3600", + }, + { + name: "invalid autoscaler scale down stablization window - >3600", + otelcol: v1beta1.OpenTelemetryCollector{ + Spec: v1beta1.OpenTelemetryCollectorSpec{ + Autoscaler: &v1beta1.AutoscalerSpec{ + MaxReplicas: &three, + 
Behavior: &autoscalingv2.HorizontalPodAutoscalerBehavior{ + ScaleDown: &autoscalingv2.HPAScalingRules{ + StabilizationWindowSeconds: &maxInt, + }, + }, + }, + }, + }, + expectedErr: "scaleDown.stabilizationWindowSeconds should be >=0 and <=3600", + }, + { + name: "invalid autoscaler scale up stabilization window - <0", + otelcol: v1beta1.OpenTelemetryCollector{ + Spec: v1beta1.OpenTelemetryCollectorSpec{ + Autoscaler: &v1beta1.AutoscalerSpec{ + MaxReplicas: &three, + Behavior: &autoscalingv2.HorizontalPodAutoscalerBehavior{ + ScaleUp: &autoscalingv2.HPAScalingRules{ + StabilizationWindowSeconds: &minusOne, }, }, }, }, }, - expectedErr: "scaleDown should be one or more", + expectedErr: "scaleUp.stabilizationWindowSeconds should be >=0 and <=3600", }, { - name: "invalid autoscaler scale up", + name: "invalid autoscaler scale up stabilization window - >3600", otelcol: v1beta1.OpenTelemetryCollector{ Spec: v1beta1.OpenTelemetryCollectorSpec{ Autoscaler: &v1beta1.AutoscalerSpec{ MaxReplicas: &three, Behavior: &autoscalingv2.HorizontalPodAutoscalerBehavior{ ScaleUp: &autoscalingv2.HPAScalingRules{ - StabilizationWindowSeconds: &zero, + StabilizationWindowSeconds: &maxInt, }, }, }, }, }, - expectedErr: "scaleUp should be one or more", + expectedErr: "scaleUp.stabilizationWindowSeconds should be >=0 and <=3600", }, { name: "invalid autoscaler target cpu utilization", @@ -1309,7 +1365,7 @@ func TestOTELColValidatingWebhook(t *testing.T) { }, } - bv := func(collector v1beta1.OpenTelemetryCollector) admission.Warnings { + bv := func(_ context.Context, collector v1beta1.OpenTelemetryCollector) admission.Warnings { var warnings admission.Warnings cfg := config.New( config.WithCollectorImage("default-collector"), @@ -1377,7 +1433,7 @@ func TestOTELColValidateUpdateWebhook(t *testing.T) { }, } - bv := func(collector v1beta1.OpenTelemetryCollector) admission.Warnings { + bv := func(_ context.Context, collector v1beta1.OpenTelemetryCollector) admission.Warnings { var warnings
admission.Warnings cfg := config.New( config.WithCollectorImage("default-collector"), diff --git a/apis/v1beta1/common.go b/apis/v1beta1/common.go index cf31de5118..77044771a5 100644 --- a/apis/v1beta1/common.go +++ b/apis/v1beta1/common.go @@ -15,6 +15,7 @@ package v1beta1 import ( + appsv1 "k8s.io/api/apps/v1" autoscalingv2 "k8s.io/api/autoscaling/v2" v1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/util/intstr" @@ -243,4 +244,9 @@ type StatefulSetCommonFields struct { // +optional // +listType=atomic VolumeClaimTemplates []v1.PersistentVolumeClaim `json:"volumeClaimTemplates,omitempty"` + // PersistentVolumeClaimRetentionPolicy describes the lifecycle of persistent volume claims + // created from volumeClaimTemplates. + // This only works with the following OpenTelemetryCollector modes: statefulset. + // +optional + PersistentVolumeClaimRetentionPolicy *appsv1.StatefulSetPersistentVolumeClaimRetentionPolicy `json:"persistentVolumeClaimRetentionPolicy,omitempty"` } diff --git a/apis/v1beta1/config.go b/apis/v1beta1/config.go index 2d88c7617e..5cb9150513 100644 --- a/apis/v1beta1/config.go +++ b/apis/v1beta1/config.go @@ -206,7 +206,12 @@ func (c *Config) getPortsForComponentKinds(logger logr.Logger, componentKinds .. case KindProcessor: continue case KindExtension: - continue + retriever = extensions.ParserFor + if c.Extensions == nil { + cfg = AnyConfig{} + } else { + cfg = *c.Extensions + } } for componentName := range enabledComponents[componentKind] { // TODO: Clean up the naming here and make it simpler to use a retriever. @@ -226,8 +231,47 @@ func (c *Config) getPortsForComponentKinds(logger logr.Logger, componentKinds .. return ports, nil } +// getEnvironmentVariablesForComponentKinds gets the environment variables for the given ComponentKind(s). 
+func (c *Config) getEnvironmentVariablesForComponentKinds(logger logr.Logger, componentKinds ...ComponentKind) ([]corev1.EnvVar, error) { + var envVars []corev1.EnvVar = []corev1.EnvVar{} + enabledComponents := c.GetEnabledComponents() + for _, componentKind := range componentKinds { + var retriever components.ParserRetriever + var cfg AnyConfig + + switch componentKind { + case KindReceiver: + retriever = receivers.ReceiverFor + cfg = c.Receivers + case KindExporter: + continue + case KindProcessor: + continue + case KindExtension: + continue + } + for componentName := range enabledComponents[componentKind] { + parser := retriever(componentName) + if parsedEnvVars, err := parser.GetEnvironmentVariables(logger, cfg.Object[componentName]); err != nil { + return nil, err + } else { + envVars = append(envVars, parsedEnvVars...) + } + } + } + + sort.Slice(envVars, func(i, j int) bool { + return envVars[i].Name < envVars[j].Name + }) + + return envVars, nil +} + // applyDefaultForComponentKinds applies defaults to the endpoints for the given ComponentKind(s). 
func (c *Config) applyDefaultForComponentKinds(logger logr.Logger, componentKinds ...ComponentKind) error { + if err := c.Service.ApplyDefaults(); err != nil { + return err + } enabledComponents := c.GetEnabledComponents() for _, componentKind := range componentKinds { var retriever components.ParserRetriever @@ -279,10 +323,22 @@ func (c *Config) GetExporterPorts(logger logr.Logger) ([]corev1.ServicePort, err return c.getPortsForComponentKinds(logger, KindExporter) } -func (c *Config) GetAllPorts(logger logr.Logger) ([]corev1.ServicePort, error) { +func (c *Config) GetExtensionPorts(logger logr.Logger) ([]corev1.ServicePort, error) { + return c.getPortsForComponentKinds(logger, KindExtension) +} + +func (c *Config) GetReceiverAndExporterPorts(logger logr.Logger) ([]corev1.ServicePort, error) { return c.getPortsForComponentKinds(logger, KindReceiver, KindExporter) } +func (c *Config) GetAllPorts(logger logr.Logger) ([]corev1.ServicePort, error) { + return c.getPortsForComponentKinds(logger, KindReceiver, KindExporter, KindExtension) +} + +func (c *Config) GetEnvironmentVariables(logger logr.Logger) ([]corev1.EnvVar, error) { + return c.getEnvironmentVariablesForComponentKinds(logger, KindReceiver) +} + func (c *Config) GetAllRbacRules(logger logr.Logger) ([]rbacv1.PolicyRule, error) { return c.getRbacRulesForComponentKinds(logger, KindReceiver, KindExporter, KindProcessor) } @@ -371,24 +427,55 @@ type Service struct { Pipelines map[string]*Pipeline `json:"pipelines" yaml:"pipelines"` } -// MetricsPort gets the port number for the metrics endpoint from the collector config if it has been set. -func (s *Service) MetricsPort() (int32, error) { +// MetricsEndpoint gets the port number and host address for the metrics endpoint from the collector config if it has been set. 
+func (s *Service) MetricsEndpoint() (string, int32, error) { + defaultAddr := "0.0.0.0" if s.GetTelemetry() == nil { // telemetry isn't set, use the default - return 8888, nil + return defaultAddr, 8888, nil } - _, port, netErr := net.SplitHostPort(s.GetTelemetry().Metrics.Address) + host, port, netErr := net.SplitHostPort(s.GetTelemetry().Metrics.Address) if netErr != nil && strings.Contains(netErr.Error(), "missing port in address") { - return 8888, nil + return defaultAddr, 8888, nil } else if netErr != nil { - return 0, netErr + return "", 0, netErr } i64, err := strconv.ParseInt(port, 10, 32) if err != nil { - return 0, err + return "", 0, err } - return int32(i64), nil + if host == "" { + host = defaultAddr + } + + return host, int32(i64), nil +} + +// ApplyDefaults inserts configuration defaults if it has not been set. +func (s *Service) ApplyDefaults() error { + telemetryAddr, telemetryPort, err := s.MetricsEndpoint() + if err != nil { + return err + } + tm := &AnyConfig{ + Object: map[string]interface{}{ + "metrics": map[string]interface{}{ + "address": fmt.Sprintf("%s:%d", telemetryAddr, telemetryPort), + }, + }, + } + + if s.Telemetry == nil { + s.Telemetry = tm + return nil + } + // NOTE: Merge without overwrite. If a telemetry endpoint is specified, the defaulting + // respects the configuration and returns an equal value. + if err := mergo.Merge(s.Telemetry, tm); err != nil { + return fmt.Errorf("telemetry config merge failed: %w", err) + } + return nil } // MetricsConfig comes from the collector. 
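The `MetricsEndpoint` change above replaces the old `MetricsPort` helper: it now returns both host and port, defaulting to `0.0.0.0:8888` when telemetry is unset or the address has no port, and `ApplyDefaults` writes that value back via a non-overwriting `mergo.Merge`. The following standalone sketch approximates just the address-splitting logic; the function name and the raw-string parameter are illustrative (the real method reads the address from the collector's telemetry config), not the operator's actual API:

```go
package main

import (
	"fmt"
	"net"
	"strconv"
	"strings"
)

// metricsEndpoint mirrors the defaulting behavior of the new
// Service.MetricsEndpoint: an empty or port-less address falls back
// to 0.0.0.0:8888, and a missing host defaults to 0.0.0.0.
func metricsEndpoint(addr string) (string, int32, error) {
	const (
		defaultHost = "0.0.0.0"
		defaultPort = int32(8888)
	)
	host, port, err := net.SplitHostPort(addr)
	if err != nil && strings.Contains(err.Error(), "missing port in address") {
		// e.g. "" or "0.0.0.0": keep the default port
		return defaultHost, defaultPort, nil
	} else if err != nil {
		return "", 0, err
	}
	p, err := strconv.ParseInt(port, 10, 32)
	if err != nil {
		return "", 0, err
	}
	if host == "" {
		// ":9090" style address: only the port was configured
		host = defaultHost
	}
	return host, int32(p), nil
}

func main() {
	host, port, _ := metricsEndpoint("1.2.3.4:4567")
	fmt.Println(host, port) // 1.2.3.4 4567
	host, port, _ = metricsEndpoint("")
	fmt.Println(host, port) // 0.0.0.0 8888
}
```

This matches the test table in `config_test.go` below, where a configured `1.2.3.4:4567` address is returned verbatim while bad or missing addresses resolve to `0.0.0.0:8888`.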
diff --git a/apis/v1beta1/config_test.go b/apis/v1beta1/config_test.go index 31895b3252..b9c288f692 100644 --- a/apis/v1beta1/config_test.go +++ b/apis/v1beta1/config_test.go @@ -220,11 +220,13 @@ func TestConfigToMetricsPort(t *testing.T) { for _, tt := range []struct { desc string + expectedAddr string expectedPort int32 config Service }{ { "custom port", + "0.0.0.0", 9090, Service{ Telemetry: &AnyConfig{ @@ -238,6 +240,7 @@ func TestConfigToMetricsPort(t *testing.T) { }, { "bad address", + "0.0.0.0", 8888, Service{ Telemetry: &AnyConfig{ @@ -251,6 +254,7 @@ func TestConfigToMetricsPort(t *testing.T) { }, { "missing address", + "0.0.0.0", 8888, Service{ Telemetry: &AnyConfig{ @@ -264,6 +268,7 @@ func TestConfigToMetricsPort(t *testing.T) { }, { "missing metrics", + "0.0.0.0", 8888, Service{ Telemetry: &AnyConfig{}, @@ -271,14 +276,30 @@ func TestConfigToMetricsPort(t *testing.T) { }, { "missing telemetry", + "0.0.0.0", 8888, Service{}, }, + { + "configured telemetry", + "1.2.3.4", + 4567, + Service{ + Telemetry: &AnyConfig{ + Object: map[string]interface{}{ + "metrics": map[string]interface{}{ + "address": "1.2.3.4:4567", + }, + }, + }, + }, + }, } { t.Run(tt.desc, func(t *testing.T) { // these are acceptable failures, we return to the collector's default metric port - port, err := tt.config.MetricsPort() + addr, port, err := tt.config.MetricsEndpoint() assert.NoError(t, err) + assert.Equal(t, tt.expectedAddr, addr) assert.Equal(t, tt.expectedPort, port) }) } @@ -402,6 +423,66 @@ func TestConfig_GetEnabledComponents(t *testing.T) { } } +func TestConfig_getEnvironmentVariablesForComponentKinds(t *testing.T) { + tests := []struct { + name string + config *Config + componentKinds []ComponentKind + envVarsLen int + }{ + { + name: "no env vars", + config: &Config{ + Receivers: AnyConfig{ + Object: map[string]interface{}{ + "myreceiver": map[string]interface{}{ + "env": "test", + }, + }, + }, + Service: Service{ + Pipelines: map[string]*Pipeline{ + "test": { + 
Receivers: []string{"myreceiver"}, + }, + }, + }, + }, + componentKinds: []ComponentKind{KindReceiver}, + envVarsLen: 0, + }, + { + name: "kubeletstats env vars", + config: &Config{ + Receivers: AnyConfig{ + Object: map[string]interface{}{ + "kubeletstats": map[string]interface{}{}, + }, + }, + Service: Service{ + Pipelines: map[string]*Pipeline{ + "test": { + Receivers: []string{"kubeletstats"}, + }, + }, + }, + }, + componentKinds: []ComponentKind{KindReceiver}, + envVarsLen: 1, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + logger := logr.Discard() + envVars, err := tt.config.GetEnvironmentVariables(logger) + + assert.NoError(t, err) + assert.Len(t, envVars, tt.envVarsLen) + }) + } +} + func TestConfig_GetReceiverPorts(t *testing.T) { tests := []struct { name string diff --git a/apis/v1beta1/targetallocator_rbac.go b/apis/v1beta1/targetallocator_rbac.go index 4fb48832e6..2ef66b4541 100644 --- a/apis/v1beta1/targetallocator_rbac.go +++ b/apis/v1beta1/targetallocator_rbac.go @@ -61,8 +61,8 @@ func CheckTargetAllocatorPrometheusCRPolicyRules( serviceAccountName string) (warnings []string, err error) { subjectAccessReviews, err := reviewer.CheckPolicyRules( ctx, - namespace, serviceAccountName, + namespace, targetAllocatorCRPolicyRules..., ) if err != nil { diff --git a/apis/v1beta1/zz_generated.deepcopy.go b/apis/v1beta1/zz_generated.deepcopy.go index eaf24ed0ba..b508f0be76 100644 --- a/apis/v1beta1/zz_generated.deepcopy.go +++ b/apis/v1beta1/zz_generated.deepcopy.go @@ -19,6 +19,7 @@ package v1beta1 import ( + appsv1 "k8s.io/api/apps/v1" "k8s.io/api/autoscaling/v2" "k8s.io/api/core/v1" networkingv1 "k8s.io/api/networking/v1" @@ -680,6 +681,11 @@ func (in *StatefulSetCommonFields) DeepCopyInto(out *StatefulSetCommonFields) { (*in)[i].DeepCopyInto(&(*out)[i]) } } + if in.PersistentVolumeClaimRetentionPolicy != nil { + in, out := &in.PersistentVolumeClaimRetentionPolicy, &out.PersistentVolumeClaimRetentionPolicy + *out = 
new(appsv1.StatefulSetPersistentVolumeClaimRetentionPolicy) + **out = **in + } } // DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new StatefulSetCommonFields. diff --git a/autoinstrumentation/dotnet/version.txt b/autoinstrumentation/dotnet/version.txt index 27f9cd322b..f8e233b273 100644 --- a/autoinstrumentation/dotnet/version.txt +++ b/autoinstrumentation/dotnet/version.txt @@ -1 +1 @@ -1.8.0 +1.9.0 diff --git a/autoinstrumentation/java/version.txt b/autoinstrumentation/java/version.txt index 834f262953..10c2c0c3d6 100644 --- a/autoinstrumentation/java/version.txt +++ b/autoinstrumentation/java/version.txt @@ -1 +1 @@ -2.8.0 +2.10.0 diff --git a/autoinstrumentation/nodejs/package.json b/autoinstrumentation/nodejs/package.json index 7e5886a89c..11fc4006ce 100644 --- a/autoinstrumentation/nodejs/package.json +++ b/autoinstrumentation/nodejs/package.json @@ -10,21 +10,12 @@ }, "devDependencies": { "copyfiles": "^2.4.1", - "rimraf": "^5.0.8", - "typescript": "^5.5.3" + "rimraf": "^6.0.1", + "typescript": "^5.6.3" }, "dependencies": { - "@opentelemetry/api": "1.9.0", - "@opentelemetry/auto-instrumentations-node": "0.48.0", - "@opentelemetry/exporter-metrics-otlp-grpc": "0.52.1", - "@opentelemetry/exporter-prometheus": "0.52.1", - "@opentelemetry/exporter-trace-otlp-grpc": "0.52.1", - "@opentelemetry/resource-detector-alibaba-cloud": "0.28.10", - "@opentelemetry/resource-detector-aws": "1.5.2", - "@opentelemetry/resource-detector-container": "0.3.11", - "@opentelemetry/resource-detector-gcp": "0.29.10", - "@opentelemetry/resources": "1.25.1", - "@opentelemetry/sdk-metrics": "1.25.1", - "@opentelemetry/sdk-node": "0.52.1" + "@opentelemetry/exporter-metrics-otlp-grpc": "0.55.0", + "@opentelemetry/auto-instrumentations-node": "0.53.0", + "@opentelemetry/exporter-prometheus": "0.55.0" } } diff --git a/autoinstrumentation/nodejs/src/autoinstrumentation.ts b/autoinstrumentation/nodejs/src/autoinstrumentation.ts index 928e6d5578..2a4aabc4a7 
100644 --- a/autoinstrumentation/nodejs/src/autoinstrumentation.ts +++ b/autoinstrumentation/nodejs/src/autoinstrumentation.ts @@ -1,5 +1,7 @@ import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node'; -import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-grpc'; +import { OTLPTraceExporter as OTLPProtoTraceExporter } from '@opentelemetry/exporter-trace-otlp-proto'; +import { OTLPTraceExporter as OTLPHttpTraceExporter } from '@opentelemetry/exporter-trace-otlp-http'; +import { OTLPTraceExporter as OTLPGrpcTraceExporter } from '@opentelemetry/exporter-trace-otlp-grpc'; import { OTLPMetricExporter } from '@opentelemetry/exporter-metrics-otlp-grpc'; import { PrometheusExporter } from '@opentelemetry/exporter-prometheus'; import { PeriodicExportingMetricReader } from '@opentelemetry/sdk-metrics'; @@ -12,6 +14,22 @@ import { diag } from '@opentelemetry/api'; import { NodeSDK } from '@opentelemetry/sdk-node'; +function getTraceExporter() { + let protocol = process.env.OTEL_EXPORTER_OTLP_PROTOCOL; + switch (protocol) { + case undefined: + case '': + case 'grpc': + return new OTLPGrpcTraceExporter(); + case 'http/json': + return new OTLPHttpTraceExporter(); + case 'http/protobuf': + return new OTLPProtoTraceExporter(); + default: + throw Error(`Creating traces exporter based on "${protocol}" protocol (configured via environment variable OTEL_EXPORTER_OTLP_PROTOCOL) is not implemented!`); + } +} + function getMetricReader() { switch (process.env.OTEL_METRICS_EXPORTER) { case undefined: @@ -35,7 +53,7 @@ function getMetricReader() { const sdk = new NodeSDK({ autoDetectResources: true, instrumentations: [getNodeAutoInstrumentations()], - traceExporter: new OTLPTraceExporter(), + traceExporter: getTraceExporter(), metricReader: getMetricReader(), resourceDetectors: [ diff --git a/autoinstrumentation/python/Dockerfile b/autoinstrumentation/python/Dockerfile index 9a6dfa7403..2546cf61ac 100644 --- 
a/autoinstrumentation/python/Dockerfile +++ b/autoinstrumentation/python/Dockerfile @@ -1,12 +1,12 @@ # To build one auto-instrumentation image for Python, please: -# - Ensure the packages are installed in the `/autoinstrumentation` directory. This is required as when instrumenting the pod, -# one init container will be created to copy all the content in `/autoinstrumentation` directory to your app's container. Then +# - Ensure the packages are installed in the `/autoinstrumentation{,-musl}` directory. This is required as when instrumenting the pod, +# one init container will be created to copy all the content in `/autoinstrumentation{,-musl}` directory to your app's container. Then # update the `PYTHONPATH` environment variable accordingly. To achieve this, you can mimic the one in `autoinstrumentation/python/Dockerfile` # by using multi-stage builds. In the first stage, install all the required packages in one custom directory with `pip install --target`. -# Then in the second stage, copy the directory to `/autoinstrumentation`. +# Then in the second stage, copy the directory to `/autoinstrumentation{,-musl}`. # - Ensure you have `opentelemetry-distro` and `opentelemetry-instrumentation` or your customized alternatives installed. # Those two packages are essential to Python auto-instrumentation. -# - Grant the necessary access to `/autoinstrumentation` directory. `chmod -R go+r /autoinstrumentation` +# - Grant the necessary access to `/autoinstrumentation{,-musl}` directory. `chmod -R go+r /autoinstrumentation` # - For auto-instrumentation by container injection, the Linux command cp is # used and must be available in the image. FROM python:3.11 AS build @@ -17,8 +17,19 @@ ADD requirements.txt . RUN mkdir workspace && pip install --target workspace -r requirements.txt +FROM python:3.11-alpine AS build-musl + +WORKDIR /operator-build + +ADD requirements.txt .
+ +RUN apk add gcc python3-dev musl-dev linux-headers +RUN mkdir workspace && pip install --target workspace -r requirements.txt + FROM busybox COPY --from=build /operator-build/workspace /autoinstrumentation +COPY --from=build-musl /operator-build/workspace /autoinstrumentation-musl RUN chmod -R go+r /autoinstrumentation +RUN chmod -R go+r /autoinstrumentation-musl diff --git a/bundle/community/manifests/opentelemetry-operator.clusterserviceversion.yaml b/bundle/community/manifests/opentelemetry-operator.clusterserviceversion.yaml index 3b1454f8d6..b7a24d1dde 100644 --- a/bundle/community/manifests/opentelemetry-operator.clusterserviceversion.yaml +++ b/bundle/community/manifests/opentelemetry-operator.clusterserviceversion.yaml @@ -99,13 +99,13 @@ metadata: categories: Logging & Tracing,Monitoring certified: "false" containerImage: ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator - createdAt: "2024-09-19T17:15:52Z" + createdAt: "2024-11-27T11:54:33Z" description: Provides the OpenTelemetry components, including the Collector operators.operatorframework.io/builder: operator-sdk-v1.29.0 operators.operatorframework.io/project_layout: go.kubebuilder.io/v3 repository: github.com/open-telemetry/opentelemetry-operator support: OpenTelemetry Community - name: opentelemetry-operator.v0.109.0 + name: opentelemetry-operator.v0.114.0 namespace: placeholder spec: apiservicedefinitions: {} @@ -284,7 +284,9 @@ spec: - "" resources: - namespaces + - secrets verbs: + - get - list - watch - apiGroups: @@ -387,6 +389,7 @@ spec: - opentelemetry.io resources: - opampbridges + - targetallocators verbs: - create - delete @@ -407,6 +410,7 @@ spec: - opampbridges/status - opentelemetrycollectors/finalizers - opentelemetrycollectors/status + - targetallocators/status verbs: - get - patch @@ -479,7 +483,7 @@ spec: valueFrom: fieldRef: fieldPath: spec.serviceAccountName - image: ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator:0.109.0 + image: 
ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator:0.114.0 livenessProbe: httpGet: path: /healthz @@ -510,7 +514,7 @@ spec: - --upstream=http://127.0.0.1:8080/ - --logtostderr=true - --v=0 - image: gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1 + image: quay.io/brancz/kube-rbac-proxy:v0.13.1 name: kube-rbac-proxy ports: - containerPort: 8443 @@ -587,7 +591,7 @@ spec: minKubeVersion: 1.23.0 provider: name: OpenTelemetry Community - version: 0.109.0 + version: 0.114.0 webhookdefinitions: - admissionReviewVersions: - v1alpha1 diff --git a/bundle/community/manifests/opentelemetry.io_instrumentations.yaml b/bundle/community/manifests/opentelemetry.io_instrumentations.yaml index 76f050bf0d..d8077d3867 100644 --- a/bundle/community/manifests/opentelemetry.io_instrumentations.yaml +++ b/bundle/community/manifests/opentelemetry.io_instrumentations.yaml @@ -217,6 +217,118 @@ spec: type: object version: type: string + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + 
requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer @@ -332,6 +444,118 @@ spec: x-kubernetes-int-or-string: true type: object type: object + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ 
+ x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer @@ -409,6 +633,19 @@ spec: properties: endpoint: type: string + tls: + properties: + ca_file: + type: string + cert_file: + type: string + configMapName: + type: string + key_file: + type: string + secretName: + type: string + type: object type: object go: properties: @@ -513,6 +750,118 @@ spec: x-kubernetes-int-or-string: true type: object type: object + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + 
name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer @@ -635,6 +984,118 @@ spec: x-kubernetes-int-or-string: true type: object type: object + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: 
+ apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer @@ -813,6 +1274,118 @@ spec: x-kubernetes-int-or-string: true type: object type: object + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + 
x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer @@ -923,6 +1496,118 @@ spec: x-kubernetes-int-or-string: true type: object type: object + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: 
string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer @@ -1046,6 +1731,118 @@ spec: x-kubernetes-int-or-string: true type: object type: object + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: 
+ type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer diff --git a/bundle/community/manifests/opentelemetry.io_opentelemetrycollectors.yaml b/bundle/community/manifests/opentelemetry.io_opentelemetrycollectors.yaml index 594e0f4aea..6ccb1c9e5f 100644 --- a/bundle/community/manifests/opentelemetry.io_opentelemetrycollectors.yaml +++ b/bundle/community/manifests/opentelemetry.io_opentelemetrycollectors.yaml @@ -6963,6 +6963,13 @@ spec: type: boolean type: object type: object + persistentVolumeClaimRetentionPolicy: + properties: + whenDeleted: + 
type: string + whenScaled: + type: string + type: object podAnnotations: additionalProperties: type: string diff --git a/bundle/openshift/manifests/opentelemetry-operator-controller-manager-metrics-service_v1_service.yaml b/bundle/openshift/manifests/opentelemetry-operator-controller-manager-metrics-service_v1_service.yaml index 66b0879b4d..a57cc212d5 100644 --- a/bundle/openshift/manifests/opentelemetry-operator-controller-manager-metrics-service_v1_service.yaml +++ b/bundle/openshift/manifests/opentelemetry-operator-controller-manager-metrics-service_v1_service.yaml @@ -1,6 +1,8 @@ apiVersion: v1 kind: Service metadata: + annotations: + service.beta.openshift.io/serving-cert-secret-name: opentelemetry-operator-metrics creationTimestamp: null labels: app.kubernetes.io/name: opentelemetry-operator diff --git a/bundle/openshift/manifests/opentelemetry-operator-prometheus-rules_monitoring.coreos.com_v1_prometheusrule.yaml b/bundle/openshift/manifests/opentelemetry-operator-prometheus-rules_monitoring.coreos.com_v1_prometheusrule.yaml new file mode 100644 index 0000000000..e6b5531887 --- /dev/null +++ b/bundle/openshift/manifests/opentelemetry-operator-prometheus-rules_monitoring.coreos.com_v1_prometheusrule.yaml @@ -0,0 +1,24 @@ +apiVersion: monitoring.coreos.com/v1 +kind: PrometheusRule +metadata: + labels: + app.kubernetes.io/managed-by: operator-lifecycle-manager + app.kubernetes.io/name: opentelemetry-operator + app.kubernetes.io/part-of: opentelemetry-operator + name: opentelemetry-operator-prometheus-rules +spec: + groups: + - name: opentelemetry-operator-monitoring.rules + rules: + - expr: sum by (type) (opentelemetry_collector_receivers) + record: type:opentelemetry_collector_receivers:sum + - expr: sum by (type) (opentelemetry_collector_exporters) + record: type:opentelemetry_collector_exporters:sum + - expr: sum by (type) (opentelemetry_collector_processors) + record: type:opentelemetry_collector_processors:sum + - expr: sum by (type) 
(opentelemetry_collector_extensions) + record: type:opentelemetry_collector_extensions:sum + - expr: sum by (type) (opentelemetry_collector_connectors) + record: type:opentelemetry_collector_connectors:sum + - expr: sum by (type) (opentelemetry_collector_info) + record: type:opentelemetry_collector_info:sum diff --git a/bundle/openshift/manifests/opentelemetry-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml b/bundle/openshift/manifests/opentelemetry-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml new file mode 100644 index 0000000000..9895de1183 --- /dev/null +++ b/bundle/openshift/manifests/opentelemetry-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml @@ -0,0 +1,15 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: opentelemetry-operator-prometheus +rules: +- apiGroups: + - "" + resources: + - services + - endpoints + - pods + verbs: + - get + - list + - watch diff --git a/bundle/openshift/manifests/opentelemetry-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml b/bundle/openshift/manifests/opentelemetry-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml new file mode 100644 index 0000000000..db617726d5 --- /dev/null +++ b/bundle/openshift/manifests/opentelemetry-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml @@ -0,0 +1,12 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: opentelemetry-operator-prometheus +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: opentelemetry-operator-prometheus +subjects: +- kind: ServiceAccount + name: prometheus-k8s + namespace: openshift-monitoring diff --git a/bundle/openshift/manifests/opentelemetry-operator.clusterserviceversion.yaml b/bundle/openshift/manifests/opentelemetry-operator.clusterserviceversion.yaml index 70db688513..751ef48728 100644 --- a/bundle/openshift/manifests/opentelemetry-operator.clusterserviceversion.yaml +++ 
b/bundle/openshift/manifests/opentelemetry-operator.clusterserviceversion.yaml @@ -99,13 +99,13 @@ metadata: categories: Logging & Tracing,Monitoring certified: "false" containerImage: ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator - createdAt: "2024-09-19T17:16:12Z" + createdAt: "2024-11-27T11:54:33Z" description: Provides the OpenTelemetry components, including the Collector operators.operatorframework.io/builder: operator-sdk-v1.29.0 operators.operatorframework.io/project_layout: go.kubebuilder.io/v3 repository: github.com/open-telemetry/opentelemetry-operator support: OpenTelemetry Community - name: opentelemetry-operator.v0.109.0 + name: opentelemetry-operator.v0.114.0 namespace: placeholder spec: apiservicedefinitions: {} @@ -284,7 +284,9 @@ spec: - "" resources: - namespaces + - secrets verbs: + - get - list - watch - apiGroups: @@ -387,6 +389,7 @@ spec: - opentelemetry.io resources: - opampbridges + - targetallocators verbs: - create - delete @@ -407,6 +410,7 @@ spec: - opampbridges/status - opentelemetrycollectors/finalizers - opentelemetrycollectors/status + - targetallocators/status verbs: - get - patch @@ -475,15 +479,15 @@ spec: - --zap-time-encoding=rfc3339nano - --enable-nginx-instrumentation=true - --enable-go-instrumentation=true - - --enable-multi-instrumentation=true - --openshift-create-dashboard=true - --feature-gates=+operator.observability.prometheus + - --enable-cr-metrics=true env: - name: SERVICE_ACCOUNT_NAME valueFrom: fieldRef: fieldPath: spec.serviceAccountName - image: ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator:0.109.0 + image: ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator:0.114.0 livenessProbe: httpGet: path: /healthz @@ -514,7 +518,11 @@ spec: - --upstream=http://127.0.0.1:8080/ - --logtostderr=true - --v=0 - image: gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1 + - --tls-cert-file=/var/run/tls/server/tls.crt + - --tls-private-key-file=/var/run/tls/server/tls.key + - 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA256 + - --tls-min-version=VersionTLS12 + image: quay.io/brancz/kube-rbac-proxy:v0.13.1 name: kube-rbac-proxy ports: - containerPort: 8443 @@ -527,9 +535,16 @@ spec: requests: cpu: 5m memory: 64Mi + volumeMounts: + - mountPath: /var/run/tls/server + name: opentelemetry-operator-metrics-cert serviceAccountName: opentelemetry-operator-controller-manager terminationGracePeriodSeconds: 10 volumes: + - name: opentelemetry-operator-metrics-cert + secret: + defaultMode: 420 + secretName: opentelemetry-operator-metrics - name: cert secret: defaultMode: 420 @@ -591,7 +606,7 @@ spec: minKubeVersion: 1.23.0 provider: name: OpenTelemetry Community - version: 0.109.0 + version: 0.114.0 webhookdefinitions: - admissionReviewVersions: - v1alpha1 diff --git a/bundle/openshift/manifests/opentelemetry.io_instrumentations.yaml b/bundle/openshift/manifests/opentelemetry.io_instrumentations.yaml index 76f050bf0d..d8077d3867 100644 --- a/bundle/openshift/manifests/opentelemetry.io_instrumentations.yaml +++ b/bundle/openshift/manifests/opentelemetry.io_instrumentations.yaml @@ -217,6 +217,118 @@ spec: type: object version: type: string + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + 
dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer @@ -332,6 +444,118 @@ spec: x-kubernetes-int-or-string: true type: object type: object + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: 
string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer @@ -409,6 +633,19 @@ spec: properties: endpoint: type: string + tls: + properties: + ca_file: + type: string + cert_file: + type: string + configMapName: + type: string + key_file: + type: string + secretName: + type: string + type: object type: object go: properties: @@ -513,6 +750,118 @@ spec: x-kubernetes-int-or-string: true type: object type: object + volumeClaimTemplate: + properties: + 
metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer @@ -635,6 +984,118 @@ spec: x-kubernetes-int-or-string: true type: object 
type: object + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer @@ -813,6 +1274,118 
@@ spec: x-kubernetes-int-or-string: true type: object type: object + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object 
volumeLimitSize: anyOf: - type: integer @@ -923,6 +1496,118 @@ spec: x-kubernetes-int-or-string: true type: object type: object + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: 
string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer @@ -1046,6 +1731,118 @@ spec: x-kubernetes-int-or-string: true type: object type: object + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: 
string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer diff --git a/bundle/openshift/manifests/opentelemetry.io_opentelemetrycollectors.yaml b/bundle/openshift/manifests/opentelemetry.io_opentelemetrycollectors.yaml index 594e0f4aea..6ccb1c9e5f 100644 --- a/bundle/openshift/manifests/opentelemetry.io_opentelemetrycollectors.yaml +++ b/bundle/openshift/manifests/opentelemetry.io_opentelemetrycollectors.yaml @@ -6963,6 +6963,13 @@ spec: type: boolean type: object type: object + persistentVolumeClaimRetentionPolicy: + properties: + whenDeleted: + type: string + whenScaled: + type: string + type: object podAnnotations: additionalProperties: type: string diff --git a/cmd/otel-allocator/Dockerfile b/cmd/otel-allocator/Dockerfile index 2e57628925..26ed93dbe0 100644 --- a/cmd/otel-allocator/Dockerfile +++ b/cmd/otel-allocator/Dockerfile @@ -1,5 +1,5 @@ # Get CA certificates from the Alpine package repo -FROM alpine:3.20 as certificates +FROM alpine:3.20 AS certificates RUN apk --no-cache add ca-certificates @@ -8,7 +8,7 @@ FROM scratch ARG TARGETARCH -WORKDIR /root/ +WORKDIR / # Copy the certs COPY --from=certificates /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt @@ -16,4 +16,6 @@ COPY --from=certificates /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-ce # Copy binary built on the host COPY bin/targetallocator_${TARGETARCH} ./main +USER 65532:65532 + ENTRYPOINT ["./main"] diff --git a/cmd/otel-allocator/README.md b/cmd/otel-allocator/README.md index 0b10a85614..7b4741d42b 100644 --- a/cmd/otel-allocator/README.md +++ b/cmd/otel-allocator/README.md @@ -211,9 +211,42 @@ rules: ### Service / Pod monitor endpoint credentials -If your service or pod monitor endpoints require credentials or other supported form of authentication (bearer token, basic auth, OAuth2 etc.), you need to ensure that the collector has access to this information. 
Due to some limitations in how the endpoints configuration is handled, target allocator currently does **not** support credentials provided via secrets. It is only possible to provide credentials in a file (for more details see issue https://github.com/open-telemetry/opentelemetry-operator/issues/1669). +If your service or pod monitor endpoints require authentication (such as bearer tokens, basic auth, or OAuth2), you must ensure that the collector has access to these credentials. + +mTLS is used to secure the connection between the target allocator and the collector so that these secrets can be retrieved. cert-manager manages the CA, server, and client certificates involved. + +Prerequisites: +- Ensure cert-manager is installed in your Kubernetes cluster. +- Grant RBAC permissions: + + - The target allocator needs the appropriate RBAC permissions to get the secrets referenced in the Service / Pod monitor. + + - The operator needs the appropriate RBAC permissions to manage cert-manager resources. The following ClusterRole can be used to grant the necessary permissions: + + ```yaml + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: opentelemetry-operator-controller-manager-cert-manager-role + rules: + - apiGroups: + - cert-manager.io + resources: + - issuers + - certificaterequests + - certificates + verbs: + - create + - get + - list + - watch + - update + - patch + - delete + ``` + +- Enable the `operator.targetallocator.mtls` feature gate in the operator's deployment. -In order to ensure your endpoints can be scraped, your collector instance needs to have the particular secret mounted as a file at the correct path.
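The last prerequisite can be sketched as follows, assuming the operator binary accepts feature gates via a `--feature-gates` flag (container and Deployment names below are illustrative assumptions):

```yaml
# Illustrative excerpt of the operator Deployment; add the feature gate
# to the manager container's arguments.
spec:
  template:
    spec:
      containers:
        - name: manager  # assumed container name
          args:
            - --feature-gates=operator.targetallocator.mtls
```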
# Design diff --git a/cmd/otel-allocator/allocation/allocator.go b/cmd/otel-allocator/allocation/allocator.go index cbe5d1d31d..b0a9125ba9 100644 --- a/cmd/otel-allocator/allocation/allocator.go +++ b/cmd/otel-allocator/allocation/allocator.go @@ -76,6 +76,11 @@ func (a *allocator) SetFilter(filter Filter) { a.filter = filter } +// SetFallbackStrategy sets the fallback strategy to use. +func (a *allocator) SetFallbackStrategy(strategy Strategy) { + a.strategy.SetFallbackStrategy(strategy) +} + // SetTargets accepts a list of targets that will be used to make // load balancing decisions. This method should be called when there are // new targets discovered or existing targets are shutdown. diff --git a/cmd/otel-allocator/allocation/allocator_test.go b/cmd/otel-allocator/allocation/allocator_test.go index 55f2bb6dc6..e6c2b9693a 100644 --- a/cmd/otel-allocator/allocation/allocator_test.go +++ b/cmd/otel-allocator/allocation/allocator_test.go @@ -17,7 +17,7 @@ package allocation import ( "testing" - "github.com/prometheus/common/model" + "github.com/prometheus/prometheus/model/labels" "github.com/stretchr/testify/assert" "github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/target" @@ -176,11 +176,11 @@ func TestAllocationCollision(t *testing.T) { cols := MakeNCollectors(3, 0) allocator.SetCollectors(cols) - firstLabels := model.LabelSet{ - "test": "test1", + firstLabels := labels.Labels{ + {Name: "test", Value: "test1"}, } - secondLabels := model.LabelSet{ - "test": "test2", + secondLabels := labels.Labels{ + {Name: "test", Value: "test2"}, } firstTarget := target.NewItem("sample-name", "0.0.0.0:8000", firstLabels, "") secondTarget := target.NewItem("sample-name", "0.0.0.0:8000", secondLabels, "") diff --git a/cmd/otel-allocator/allocation/consistent_hashing.go b/cmd/otel-allocator/allocation/consistent_hashing.go index 8ec07ba857..c8a16903bc 100644 --- a/cmd/otel-allocator/allocation/consistent_hashing.go +++ 
b/cmd/otel-allocator/allocation/consistent_hashing.go @@ -16,7 +16,6 @@ package allocation import ( "fmt" - "strings" "github.com/buraksezer/consistent" "github.com/cespare/xxhash/v2" @@ -59,7 +58,7 @@ func (s *consistentHashingStrategy) GetName() string { } func (s *consistentHashingStrategy) GetCollectorForTarget(collectors map[string]*Collector, item *target.Item) (*Collector, error) { - hashKey := strings.Join(item.TargetURL, "") + hashKey := item.TargetURL member := s.consistentHasher.LocateKey([]byte(hashKey)) collectorName := member.String() collector, ok := collectors[collectorName] @@ -84,3 +83,5 @@ func (s *consistentHashingStrategy) SetCollectors(collectors map[string]*Collect s.consistentHasher = consistent.New(members, s.config) } + +func (s *consistentHashingStrategy) SetFallbackStrategy(fallbackStrategy Strategy) {} diff --git a/cmd/otel-allocator/allocation/least_weighted.go b/cmd/otel-allocator/allocation/least_weighted.go index caa2febbd9..49d935715d 100644 --- a/cmd/otel-allocator/allocation/least_weighted.go +++ b/cmd/otel-allocator/allocation/least_weighted.go @@ -54,3 +54,5 @@ func (s *leastWeightedStrategy) GetCollectorForTarget(collectors map[string]*Col } func (s *leastWeightedStrategy) SetCollectors(_ map[string]*Collector) {} + +func (s *leastWeightedStrategy) SetFallbackStrategy(fallbackStrategy Strategy) {} diff --git a/cmd/otel-allocator/allocation/per_node.go b/cmd/otel-allocator/allocation/per_node.go index a5e2bfa3f8..3d9c76d90d 100644 --- a/cmd/otel-allocator/allocation/per_node.go +++ b/cmd/otel-allocator/allocation/per_node.go @@ -25,21 +25,31 @@ const perNodeStrategyName = "per-node" var _ Strategy = &perNodeStrategy{} type perNodeStrategy struct { - collectorByNode map[string]*Collector + collectorByNode map[string]*Collector + fallbackStrategy Strategy } func newPerNodeStrategy() Strategy { return &perNodeStrategy{ - collectorByNode: make(map[string]*Collector), + collectorByNode: make(map[string]*Collector), + 
fallbackStrategy: nil, + } +} + +func (s *perNodeStrategy) SetFallbackStrategy(fallbackStrategy Strategy) { + s.fallbackStrategy = fallbackStrategy +} + func (s *perNodeStrategy) GetName() string { return perNodeStrategyName } func (s *perNodeStrategy) GetCollectorForTarget(collectors map[string]*Collector, item *target.Item) (*Collector, error) { targetNodeName := item.GetNodeName() + if targetNodeName == "" && s.fallbackStrategy != nil { + return s.fallbackStrategy.GetCollectorForTarget(collectors, item) + } + collector, ok := s.collectorByNode[targetNodeName] if !ok { return nil, fmt.Errorf("could not find collector for node %s", targetNodeName) @@ -54,4 +64,8 @@ func (s *perNodeStrategy) SetCollectors(collectors map[string]*Collector) { s.collectorByNode[collector.NodeName] = collector } } + + if s.fallbackStrategy != nil { + s.fallbackStrategy.SetCollectors(collectors) + } } diff --git a/cmd/otel-allocator/allocation/per_node_test.go b/cmd/otel-allocator/allocation/per_node_test.go index d853574a11..4d17e6bbb3 100644 --- a/cmd/otel-allocator/allocation/per_node_test.go +++ b/cmd/otel-allocator/allocation/per_node_test.go @@ -17,7 +17,7 @@ package allocation import ( "testing" - "github.com/prometheus/common/model" + "github.com/prometheus/prometheus/model/labels" "github.com/stretchr/testify/assert" logf "sigs.k8s.io/controller-runtime/pkg/log" @@ -26,30 +26,40 @@ import ( var loggerPerNode = logf.Log.WithName("unit-tests") -// Tests that two targets with the same target url and job name but different label set are both added. +func GetTargetsWithNodeName(targets []*target.Item) (targetsWithNodeName []*target.Item) { + for _, item := range targets { + if item.GetNodeName() != "" { + targetsWithNodeName = append(targetsWithNodeName, item) + } + } + return targetsWithNodeName +} + +// Tests that, of four targets, all are assigned except the one that lacks +// node labels.
func TestAllocationPerNode(t *testing.T) { // prepare allocator with initial targets and collectors s, _ := New("per-node", loggerPerNode) cols := MakeNCollectors(4, 0) s.SetCollectors(cols) - firstLabels := model.LabelSet{ - "test": "test1", - "__meta_kubernetes_pod_node_name": "node-0", + firstLabels := labels.Labels{ + {Name: "test", Value: "test1"}, + {Name: "__meta_kubernetes_pod_node_name", Value: "node-0"}, } - secondLabels := model.LabelSet{ - "test": "test2", - "__meta_kubernetes_node_name": "node-1", + secondLabels := labels.Labels{ + {Name: "test", Value: "test2"}, + {Name: "__meta_kubernetes_node_name", Value: "node-1"}, } // no label, should be skipped - thirdLabels := model.LabelSet{ - "test": "test3", + thirdLabels := labels.Labels{ + {Name: "test", Value: "test3"}, } // endpointslice target kind and name - fourthLabels := model.LabelSet{ - "test": "test4", - "__meta_kubernetes_endpointslice_address_target_kind": "Node", - "__meta_kubernetes_endpointslice_address_target_name": "node-3", + fourthLabels := labels.Labels{ + {Name: "test", Value: "test4"}, + {Name: "__meta_kubernetes_endpointslice_address_target_kind", Value: "Node"}, + {Name: "__meta_kubernetes_endpointslice_address_target_name", Value: "node-3"}, } firstTarget := target.NewItem("sample-name", "0.0.0.0:8000", firstLabels, "") @@ -93,6 +103,77 @@ func TestAllocationPerNode(t *testing.T) { } } +// Tests that four targets, with one of them missing node labels, are all assigned. 
+func TestAllocationPerNodeUsingFallback(t *testing.T) { + // prepare allocator with initial targets and collectors + s, _ := New("per-node", loggerPerNode, WithFallbackStrategy(consistentHashingStrategyName)) + + cols := MakeNCollectors(4, 0) + s.SetCollectors(cols) + firstLabels := labels.Labels{ + {Name: "test", Value: "test1"}, + {Name: "__meta_kubernetes_pod_node_name", Value: "node-0"}, + } + secondLabels := labels.Labels{ + {Name: "test", Value: "test2"}, + {Name: "__meta_kubernetes_node_name", Value: "node-1"}, + } + // no label, should be allocated by the fallback strategy + thirdLabels := labels.Labels{ + {Name: "test", Value: "test3"}, + } + // endpointslice target kind and name + fourthLabels := labels.Labels{ + {Name: "test", Value: "test4"}, + {Name: "__meta_kubernetes_endpointslice_address_target_kind", Value: "Node"}, + {Name: "__meta_kubernetes_endpointslice_address_target_name", Value: "node-3"}, + } + + firstTarget := target.NewItem("sample-name", "0.0.0.0:8000", firstLabels, "") + secondTarget := target.NewItem("sample-name", "0.0.0.0:8000", secondLabels, "") + thirdTarget := target.NewItem("sample-name", "0.0.0.0:8000", thirdLabels, "") + fourthTarget := target.NewItem("sample-name", "0.0.0.0:8000", fourthLabels, "") + + targetList := map[string]*target.Item{ + firstTarget.Hash(): firstTarget, + secondTarget.Hash(): secondTarget, + thirdTarget.Hash(): thirdTarget, + fourthTarget.Hash(): fourthTarget, + } + + // test that targets and collectors are added properly + s.SetTargets(targetList) + + // verify length + actualItems := s.TargetItems() + + // all targets should be allocated + expectedTargetLen := len(targetList) + assert.Len(t, actualItems, expectedTargetLen) + + // verify allocation to nodes + for targetHash, item := range targetList { + actualItem, found := actualItems[targetHash] + + assert.True(t, found, "target with hash %s not found", item.Hash()) + + itemsForCollector := s.GetTargetsForCollectorAndJob(actualItem.CollectorName, 
actualItem.JobName) + + // the targets with node labels should be assigned one to each collector; the third target, + // which has no node label, should be assigned by the fallback strategy, which may place it on + // the otherwise empty collector or on one of the others, depending on the strategy and collector loop order + if targetHash == thirdTarget.Hash() { + assert.Empty(t, item.GetNodeName()) + assert.NotZero(t, len(itemsForCollector)) + continue + } + + // Only check targets that have been assigned using the per-node (not fallback) strategy here + assert.Len(t, GetTargetsWithNodeName(itemsForCollector), 1) + assert.Equal(t, actualItem, GetTargetsWithNodeName(itemsForCollector)[0]) + } +} + func TestTargetsWithNoCollectorsPerNode(t *testing.T) { // prepare allocator with initial targets and collectors c, _ := New("per-node", loggerPerNode) diff --git a/cmd/otel-allocator/allocation/strategy.go b/cmd/otel-allocator/allocation/strategy.go index 29ae7fd99a..47fafd5662 100644 --- a/cmd/otel-allocator/allocation/strategy.go +++ b/cmd/otel-allocator/allocation/strategy.go @@ -29,6 +29,8 @@ import ( type AllocatorProvider func(log logr.Logger, opts ...AllocationOption) Allocator var ( + strategies = map[string]Strategy{} + registry = map[string]AllocatorProvider{} // TargetsPerCollector records how many targets have been assigned to each collector.
@@ -67,6 +69,16 @@ func WithFilter(filter Filter) AllocationOption { } } +func WithFallbackStrategy(fallbackStrategy string) AllocationOption { + var strategy, ok = strategies[fallbackStrategy] + if fallbackStrategy != "" && !ok { + panic(fmt.Errorf("unregistered strategy used as fallback: %s", fallbackStrategy)) + } + return func(allocator Allocator) { + allocator.SetFallbackStrategy(strategy) + } +} + func RecordTargetsKept(targets map[string]*target.Item) { targetsRemaining.Add(float64(len(targets))) } @@ -101,6 +113,7 @@ type Allocator interface { Collectors() map[string]*Collector GetTargetsForCollectorAndJob(collector string, job string) []*target.Item SetFilter(filter Filter) + SetFallbackStrategy(strategy Strategy) } type Strategy interface { @@ -110,6 +123,8 @@ type Strategy interface { // SetCollectors call. Strategies which don't need this information can just ignore it. SetCollectors(map[string]*Collector) GetName() string + // Add fallback strategy for strategies whose main allocation method can sometimes leave targets unassigned + SetFallbackStrategy(Strategy) } var _ consistent.Member = Collector{} @@ -136,22 +151,18 @@ func NewCollector(name, node string) *Collector { } func init() { - err := Register(leastWeightedStrategyName, func(log logr.Logger, opts ...AllocationOption) Allocator { - return newAllocator(log, newleastWeightedStrategy(), opts...) - }) - if err != nil { - panic(err) - } - err = Register(consistentHashingStrategyName, func(log logr.Logger, opts ...AllocationOption) Allocator { - return newAllocator(log, newConsistentHashingStrategy(), opts...) 
- }) - if err != nil { - panic(err) + strategies = map[string]Strategy{ + leastWeightedStrategyName: newleastWeightedStrategy(), + consistentHashingStrategyName: newConsistentHashingStrategy(), + perNodeStrategyName: newPerNodeStrategy(), } - err = Register(perNodeStrategyName, func(log logr.Logger, opts ...AllocationOption) Allocator { - return newAllocator(log, newPerNodeStrategy(), opts...) - }) - if err != nil { - panic(err) + + for strategyName, strategy := range strategies { + err := Register(strategyName, func(log logr.Logger, opts ...AllocationOption) Allocator { + return newAllocator(log, strategy, opts...) + }) + if err != nil { + panic(err) + } } } diff --git a/cmd/otel-allocator/allocation/testutils.go b/cmd/otel-allocator/allocation/testutils.go index 054e9e0205..3189b576c1 100644 --- a/cmd/otel-allocator/allocation/testutils.go +++ b/cmd/otel-allocator/allocation/testutils.go @@ -21,7 +21,7 @@ import ( "strconv" "testing" - "github.com/prometheus/common/model" + "github.com/prometheus/prometheus/model/labels" "github.com/stretchr/testify/require" logf "sigs.k8s.io/controller-runtime/pkg/log" @@ -39,9 +39,9 @@ func MakeNNewTargets(n int, numCollectors int, startingIndex int) map[string]*ta toReturn := map[string]*target.Item{} for i := startingIndex; i < n+startingIndex; i++ { collector := fmt.Sprintf("collector-%d", colIndex(i, numCollectors)) - label := model.LabelSet{ - "i": model.LabelValue(strconv.Itoa(i)), - "total": model.LabelValue(strconv.Itoa(n + startingIndex)), + label := labels.Labels{ + {Name: "i", Value: strconv.Itoa(i)}, + {Name: "total", Value: strconv.Itoa(n + startingIndex)}, } newTarget := target.NewItem(fmt.Sprintf("test-job-%d", i), fmt.Sprintf("test-url-%d", i), label, collector) toReturn[newTarget.Hash()] = newTarget @@ -65,10 +65,10 @@ func MakeNCollectors(n int, startingIndex int) map[string]*Collector { func MakeNNewTargetsWithEmptyCollectors(n int, startingIndex int) map[string]*target.Item { toReturn := 
map[string]*target.Item{} for i := startingIndex; i < n+startingIndex; i++ { - label := model.LabelSet{ - "i": model.LabelValue(strconv.Itoa(i)), - "total": model.LabelValue(strconv.Itoa(n + startingIndex)), - "__meta_kubernetes_pod_node_name": model.LabelValue("node-0"), + label := labels.Labels{ + {Name: "i", Value: strconv.Itoa(i)}, + {Name: "total", Value: strconv.Itoa(n + startingIndex)}, + {Name: "__meta_kubernetes_pod_node_name", Value: "node-0"}, } newTarget := target.NewItem(fmt.Sprintf("test-job-%d", i), fmt.Sprintf("test-url-%d", i), label, "") toReturn[newTarget.Hash()] = newTarget diff --git a/cmd/otel-allocator/benchmark_test.go b/cmd/otel-allocator/benchmark_test.go new file mode 100644 index 0000000000..7b6c644347 --- /dev/null +++ b/cmd/otel-allocator/benchmark_test.go @@ -0,0 +1,192 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package main + +import ( + "context" + "fmt" + "os" + "strconv" + "strings" + "testing" + + gokitlog "github.com/go-kit/log" + "github.com/go-logr/logr" + "github.com/prometheus/client_golang/prometheus" + "github.com/prometheus/common/model" + "github.com/prometheus/prometheus/discovery" + "github.com/prometheus/prometheus/discovery/targetgroup" + "github.com/prometheus/prometheus/model/labels" + "github.com/prometheus/prometheus/model/relabel" + "github.com/stretchr/testify/require" + ctrl "sigs.k8s.io/controller-runtime" + "sigs.k8s.io/controller-runtime/pkg/log" + + "github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/allocation" + "github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/prehook" + "github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/server" + "github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/target" +) + +// BenchmarkProcessTargets benchmarks the whole target allocation pipeline. It starts with data the Prometheus +// discovery manager would normally output, and pushes it all the way into the allocator. It notably does *not* check +// the HTTP server afterward. Test data is chosen to be reasonably representative of what the Prometheus service discovery +// outputs in the real world. +func BenchmarkProcessTargets(b *testing.B) { + numTargets := 10000 + targetsPerGroup := 5 + groupsPerJob := 20 + tsets := prepareBenchmarkData(numTargets, targetsPerGroup, groupsPerJob) + labelsBuilder := labels.NewBuilder(labels.EmptyLabels()) + + b.ResetTimer() + for _, strategy := range allocation.GetRegisteredAllocatorNames() { + b.Run(strategy, func(b *testing.B) { + targetDiscoverer, allocator := createTestDiscoverer(strategy, map[string][]*relabel.Config{}) + for i := 0; i < b.N; i++ { + targetDiscoverer.ProcessTargets(labelsBuilder, tsets, allocator.SetTargets) + } + }) + } +} + +// BenchmarkProcessTargetsWithRelabelConfig is BenchmarkProcessTargets with a relabel config set.
The relabel config +// does not actually modify any records, but does force the prehook to perform any necessary conversions along the way. +func BenchmarkProcessTargetsWithRelabelConfig(b *testing.B) { + numTargets := 10000 + targetsPerGroup := 5 + groupsPerJob := 20 + tsets := prepareBenchmarkData(numTargets, targetsPerGroup, groupsPerJob) + labelsBuilder := labels.NewBuilder(labels.EmptyLabels()) + prehookConfig := make(map[string][]*relabel.Config, len(tsets)) + for jobName := range tsets { + // keep all targets in half the jobs, drop the rest + jobNrStr := strings.Split(jobName, "-")[1] + jobNr, err := strconv.Atoi(jobNrStr) + require.NoError(b, err) + var action relabel.Action + if jobNr%2 == 0 { + action = "keep" + } else { + action = "drop" + } + prehookConfig[jobName] = []*relabel.Config{ + { + Action: action, + Regex: relabel.MustNewRegexp(".*"), + SourceLabels: model.LabelNames{"__address__"}, + }, + } + } + + b.ResetTimer() + for _, strategy := range allocation.GetRegisteredAllocatorNames() { + b.Run(strategy, func(b *testing.B) { + targetDiscoverer, allocator := createTestDiscoverer(strategy, prehookConfig) + for i := 0; i < b.N; i++ { + targetDiscoverer.ProcessTargets(labelsBuilder, tsets, allocator.SetTargets) + } + }) + } +} + +func prepareBenchmarkData(numTargets, targetsPerGroup, groupsPerJob int) map[string][]*targetgroup.Group { + numGroups := numTargets / targetsPerGroup + numJobs := numGroups / groupsPerJob + jobNamePrefix := "test-" + groupLabels := model.LabelSet{ + "__meta_kubernetes_pod_controller_name": "example", + "__meta_kubernetes_pod_ip": "10.244.0.251", + "__meta_kubernetes_pod_uid": "676ebee7-14f8-481e-a937-d2affaec4105", + "__meta_kubernetes_endpointslice_port_protocol": "TCP", + "__meta_kubernetes_endpointslice_endpoint_conditions_ready": "true", + "__meta_kubernetes_service_annotation_kubectl_kubernetes_io_last_applied_configuration": 
"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"example\"},\"name\":\"example-svc\",\"namespace\":\"example\"},\"spec\":{\"clusterIP\":\"None\",\"ports\":[{\"name\":\"http-example\",\"port\":9006,\"targetPort\":9006}],\"selector\":{\"app\":\"example\"},\"type\":\"ClusterIP\"}}\n", + "__meta_kubernetes_endpointslice_labelpresent_app": "true", + "__meta_kubernetes_endpointslice_name": "example-svc-qgwxf", + "__address__": "10.244.0.251:9006", + "__meta_kubernetes_endpointslice_endpoint_conditions_terminating": "false", + "__meta_kubernetes_pod_labelpresent_pod_template_hash": "true", + "__meta_kubernetes_endpointslice_label_kubernetes_io_service_name": "example-svc", + "__meta_kubernetes_endpointslice_labelpresent_service_kubernetes_io_headless": "true", + "__meta_kubernetes_pod_label_pod_template_hash": "6b549885f8", + "__meta_kubernetes_endpointslice_address_target_name": "example-6b549885f8-7tbcw", + "__meta_kubernetes_pod_labelpresent_app": "true", + "somelabel": "somevalue", + } + exampleTarget := model.LabelSet{ + "__meta_kubernetes_endpointslice_port": "9006", + "__meta_kubernetes_service_label_app": "example", + "__meta_kubernetes_endpointslice_port_name": "http-example", + "__meta_kubernetes_pod_ready": "true", + "__meta_kubernetes_endpointslice_address_type": "IPv4", + "__meta_kubernetes_endpointslice_label_endpointslice_kubernetes_io_managed_by": "endpointslice-controller.k8s.io", + "__meta_kubernetes_endpointslice_labelpresent_endpointslice_kubernetes_io_managed_by": "true", + "__meta_kubernetes_endpointslice_label_app": "example", + "__meta_kubernetes_endpointslice_endpoint_conditions_serving": "true", + "__meta_kubernetes_pod_phase": "Running", + "__meta_kubernetes_pod_controller_kind": "ReplicaSet", + "__meta_kubernetes_service_annotationpresent_kubectl_kubernetes_io_last_applied_configuration": "true", + "__meta_kubernetes_service_labelpresent_app": "true", + 
+		"__meta_kubernetes_endpointslice_labelpresent_kubernetes_io_service_name": "true",
+		"__meta_kubernetes_endpointslice_annotation_endpoints_kubernetes_io_last_change_trigger_time": "2023-09-27T16:01:29Z",
+		"__meta_kubernetes_pod_name": "example-6b549885f8-7tbcw",
+		"__meta_kubernetes_service_name": "example-svc",
+		"__meta_kubernetes_namespace": "example",
+		"__meta_kubernetes_endpointslice_annotationpresent_endpoints_kubernetes_io_last_change_trigger_time": "true",
+		"__meta_kubernetes_pod_node_name": "kind-control-plane",
+		"__meta_kubernetes_endpointslice_address_target_kind": "Pod",
+		"__meta_kubernetes_pod_host_ip": "172.18.0.2",
+		"__meta_kubernetes_endpointslice_label_service_kubernetes_io_headless": "",
+		"__meta_kubernetes_pod_label_app": "example",
+	}
+	targets := []model.LabelSet{}
+	for i := 0; i < numTargets; i++ {
+		targets = append(targets, exampleTarget.Clone())
+	}
+	groups := make([]*targetgroup.Group, numGroups)
+	for i := 0; i < numGroups; i++ {
+		groupTargets := targets[(i * targetsPerGroup):(i*targetsPerGroup + targetsPerGroup)]
+		groups[i] = &targetgroup.Group{
+			Labels:  groupLabels,
+			Targets: groupTargets,
+		}
+	}
+	tsets := make(map[string][]*targetgroup.Group, numJobs)
+	for i := 0; i < numJobs; i++ {
+		jobGroups := groups[(i * groupsPerJob):(i*groupsPerJob + groupsPerJob)]
+		jobName := fmt.Sprintf("%s%d", jobNamePrefix, i)
+		tsets[jobName] = jobGroups
+	}
+	return tsets
+}
+
+func createTestDiscoverer(allocationStrategy string, prehookConfig map[string][]*relabel.Config) (*target.Discoverer, allocation.Allocator) {
+	ctx := context.Background()
+	logger := ctrl.Log.WithName(fmt.Sprintf("bench-%s", allocationStrategy))
+	ctrl.SetLogger(logr.New(log.NullLogSink{}))
+	allocatorPrehook := prehook.New("relabel-config", logger)
+	allocatorPrehook.SetConfig(prehookConfig)
+	allocator, err := allocation.New(allocationStrategy, logger, allocation.WithFilter(allocatorPrehook))
+	srv := server.NewServer(logger, allocator, "localhost:0")
+	if err != nil {
+		setupLog.Error(err, "Unable to initialize allocation strategy")
+		os.Exit(1)
+	}
+	registry := prometheus.NewRegistry()
+	sdMetrics, _ := discovery.CreateAndRegisterSDMetrics(registry)
+	discoveryManager := discovery.NewManager(ctx, gokitlog.NewNopLogger(), registry, sdMetrics)
+	targetDiscoverer := target.NewDiscoverer(logger, discoveryManager, allocatorPrehook, srv)
+	return targetDiscoverer, allocator
+}
diff --git a/cmd/otel-allocator/config/config.go b/cmd/otel-allocator/config/config.go
index 3e3fd389c7..ee55fe0a32 100644
--- a/cmd/otel-allocator/config/config.go
+++ b/cmd/otel-allocator/config/config.go
@@ -46,24 +46,29 @@ const (
 )
 
 type Config struct {
-	ListenAddr         string                `yaml:"listen_addr,omitempty"`
-	KubeConfigFilePath string                `yaml:"kube_config_file_path,omitempty"`
-	ClusterConfig      *rest.Config          `yaml:"-"`
-	RootLogger         logr.Logger           `yaml:"-"`
-	CollectorSelector  *metav1.LabelSelector `yaml:"collector_selector,omitempty"`
-	PromConfig         *promconfig.Config    `yaml:"config"`
-	AllocationStrategy string                `yaml:"allocation_strategy,omitempty"`
-	FilterStrategy     string                `yaml:"filter_strategy,omitempty"`
-	PrometheusCR       PrometheusCRConfig    `yaml:"prometheus_cr,omitempty"`
-	HTTPS              HTTPSServerConfig     `yaml:"https,omitempty"`
+	ListenAddr                 string                `yaml:"listen_addr,omitempty"`
+	KubeConfigFilePath         string                `yaml:"kube_config_file_path,omitempty"`
+	ClusterConfig              *rest.Config          `yaml:"-"`
+	RootLogger                 logr.Logger           `yaml:"-"`
+	CollectorSelector          *metav1.LabelSelector `yaml:"collector_selector,omitempty"`
+	PromConfig                 *promconfig.Config    `yaml:"config"`
+	AllocationStrategy         string                `yaml:"allocation_strategy,omitempty"`
+	AllocationFallbackStrategy string                `yaml:"allocation_fallback_strategy,omitempty"`
+	FilterStrategy             string                `yaml:"filter_strategy,omitempty"`
+	PrometheusCR               PrometheusCRConfig    `yaml:"prometheus_cr,omitempty"`
+	HTTPS                      HTTPSServerConfig     `yaml:"https,omitempty"`
 }
 
 type PrometheusCRConfig struct {
 	Enabled bool `yaml:"enabled,omitempty"`
 	PodMonitorSelector *metav1.LabelSelector `yaml:"pod_monitor_selector,omitempty"`
+	PodMonitorNamespaceSelector *metav1.LabelSelector `yaml:"pod_monitor_namespace_selector,omitempty"`
 	ServiceMonitorSelector *metav1.LabelSelector `yaml:"service_monitor_selector,omitempty"`
 	ServiceMonitorNamespaceSelector *metav1.LabelSelector `yaml:"service_monitor_namespace_selector,omitempty"`
-	PodMonitorNamespaceSelector *metav1.LabelSelector `yaml:"pod_monitor_namespace_selector,omitempty"`
+	ScrapeConfigSelector *metav1.LabelSelector `yaml:"scrape_config_selector,omitempty"`
+	ScrapeConfigNamespaceSelector *metav1.LabelSelector `yaml:"scrape_config_namespace_selector,omitempty"`
+	ProbeSelector *metav1.LabelSelector `yaml:"probe_selector,omitempty"`
+	ProbeNamespaceSelector *metav1.LabelSelector `yaml:"probe_namespace_selector,omitempty"`
 	ScrapeInterval model.Duration `yaml:"scrape_interval,omitempty"`
 }
 
@@ -115,29 +120,34 @@ func LoadFromCLI(target *Config, flagSet *pflag.FlagSet) error {
 		target.PrometheusCR.Enabled = prometheusCREnabled
 	}
 
-	target.HTTPS.Enabled, err = getHttpsEnabled(flagSet)
-	if err != nil {
+	if httpsEnabled, changed, err := getHttpsEnabled(flagSet); err != nil {
 		return err
+	} else if changed {
+		target.HTTPS.Enabled = httpsEnabled
 	}
 
-	target.HTTPS.ListenAddr, err = getHttpsListenAddr(flagSet)
-	if err != nil {
+	if listenAddrHttps, changed, err := getHttpsListenAddr(flagSet); err != nil {
 		return err
+	} else if changed {
+		target.HTTPS.ListenAddr = listenAddrHttps
 	}
 
-	target.HTTPS.CAFilePath, err = getHttpsCAFilePath(flagSet)
-	if err != nil {
+	if caFilePath, changed, err := getHttpsCAFilePath(flagSet); err != nil {
 		return err
+	} else if changed {
+		target.HTTPS.CAFilePath = caFilePath
 	}
 
-	target.HTTPS.TLSCertFilePath, err = getHttpsTLSCertFilePath(flagSet)
-	if err != nil {
+	if tlsCertFilePath, changed, err := getHttpsTLSCertFilePath(flagSet); err != nil {
 		return err
+	} else if changed {
+		target.HTTPS.TLSCertFilePath = tlsCertFilePath
 	}
 
-	target.HTTPS.TLSKeyFilePath, err = getHttpsTLSKeyFilePath(flagSet)
-	if err != nil {
+	if tlsKeyFilePath, changed, err := getHttpsTLSKeyFilePath(flagSet); err != nil {
 		return err
+	} else if changed {
+		target.HTTPS.TLSKeyFilePath = tlsKeyFilePath
 	}
 
 	return nil
@@ -156,8 +166,9 @@ func unmarshal(cfg *Config, configFile string) error {
 
 func CreateDefaultConfig() Config {
 	return Config{
-		AllocationStrategy: DefaultAllocationStrategy,
-		FilterStrategy:     DefaultFilterStrategy,
+		AllocationStrategy:         DefaultAllocationStrategy,
+		AllocationFallbackStrategy: "",
+		FilterStrategy:             DefaultFilterStrategy,
 		PrometheusCR: PrometheusCRConfig{
 			ScrapeInterval: DefaultCRScrapeInterval,
 		},
diff --git a/cmd/otel-allocator/config/config_test.go b/cmd/otel-allocator/config/config_test.go
index 53ddc52a49..c1b721b773 100644
--- a/cmd/otel-allocator/config/config_test.go
+++ b/cmd/otel-allocator/config/config_test.go
@@ -64,6 +64,7 @@ func TestLoad(t *testing.T) {
 				},
 				HTTPS: HTTPSServerConfig{
 					Enabled:         true,
+					ListenAddr:      ":8443",
 					CAFilePath:      "/path/to/ca.pem",
 					TLSCertFilePath: "/path/to/cert.pem",
 					TLSKeyFilePath:  "/path/to/key.pem",
diff --git a/cmd/otel-allocator/config/flags.go b/cmd/otel-allocator/config/flags.go
index 5b3a3705db..0a47c27636 100644
--- a/cmd/otel-allocator/config/flags.go
+++ b/cmd/otel-allocator/config/flags.go
@@ -78,22 +78,47 @@ func getPrometheusCREnabled(flagSet *pflag.FlagSet) (value bool, changed bool, e
 	return
 }
 
-func getHttpsListenAddr(flagSet *pflag.FlagSet) (string, error) {
-	return flagSet.GetString(listenAddrHttpsFlagName)
+func getHttpsListenAddr(flagSet *pflag.FlagSet) (value string, changed bool, err error) {
+	if changed = flagSet.Changed(listenAddrHttpsFlagName); !changed {
+		value, err = ":8443", nil
+		return
+	}
+	value, err = flagSet.GetString(listenAddrHttpsFlagName)
+	return
 }
 
-func getHttpsEnabled(flagSet *pflag.FlagSet) (bool, error) {
-	return flagSet.GetBool(httpsEnabledFlagName)
+func getHttpsEnabled(flagSet *pflag.FlagSet) (value bool, changed bool, err error) {
+	if changed = flagSet.Changed(httpsEnabledFlagName); !changed {
+		value, err = false, nil
+		return
+	}
+	value, err = flagSet.GetBool(httpsEnabledFlagName)
+	return
 }
 
-func getHttpsCAFilePath(flagSet *pflag.FlagSet) (string, error) {
-	return flagSet.GetString(httpsCAFilePathFlagName)
+func getHttpsCAFilePath(flagSet *pflag.FlagSet) (value string, changed bool, err error) {
+	if changed = flagSet.Changed(httpsCAFilePathFlagName); !changed {
+		value, err = "", nil
+		return
+	}
+	value, err = flagSet.GetString(httpsCAFilePathFlagName)
+	return
 }
 
-func getHttpsTLSCertFilePath(flagSet *pflag.FlagSet) (string, error) {
-	return flagSet.GetString(httpsTLSCertFilePathFlagName)
+func getHttpsTLSCertFilePath(flagSet *pflag.FlagSet) (value string, changed bool, err error) {
+	if changed = flagSet.Changed(httpsTLSCertFilePathFlagName); !changed {
+		value, err = "", nil
+		return
+	}
+	value, err = flagSet.GetString(httpsTLSCertFilePathFlagName)
+	return
 }
 
-func getHttpsTLSKeyFilePath(flagSet *pflag.FlagSet) (string, error) {
-	return flagSet.GetString(httpsTLSKeyFilePathFlagName)
+func getHttpsTLSKeyFilePath(flagSet *pflag.FlagSet) (value string, changed bool, err error) {
+	if changed = flagSet.Changed(httpsTLSKeyFilePathFlagName); !changed {
+		value, err = "", nil
+		return
+	}
+	value, err = flagSet.GetString(httpsTLSKeyFilePathFlagName)
+	return
 }
diff --git a/cmd/otel-allocator/config/flags_test.go b/cmd/otel-allocator/config/flags_test.go
index 2c33d65017..b2725c170e 100644
--- a/cmd/otel-allocator/config/flags_test.go
+++ b/cmd/otel-allocator/config/flags_test.go
@@ -77,13 +77,19 @@ func TestFlagGetters(t *testing.T) {
 			name:          "HttpsServer",
 			flagArgs:      []string{"--" + httpsEnabledFlagName, "true"},
 			expectedValue: true,
-			getterFunc:    func(fs *pflag.FlagSet) (interface{}, error) { return getHttpsEnabled(fs) },
+			getterFunc: func(fs *pflag.FlagSet) (interface{}, error) {
+				value, _, err := getHttpsEnabled(fs)
+				return value, err
+			},
 		},
 		{
 			name:          "HttpsServerKey",
 			flagArgs:      []string{"--" + httpsTLSKeyFilePathFlagName, "/path/to/tls.key"},
 			expectedValue: "/path/to/tls.key",
-			getterFunc:    func(fs *pflag.FlagSet) (interface{}, error) { return getHttpsTLSKeyFilePath(fs) },
+			getterFunc: func(fs *pflag.FlagSet) (interface{}, error) {
+				value, _, err := getHttpsTLSKeyFilePath(fs)
+				return value, err
+			},
 		},
 	}
diff --git a/cmd/otel-allocator/config/testdata/config_test.yaml b/cmd/otel-allocator/config/testdata/config_test.yaml
index bcb220adf8..47a3226517 100644
--- a/cmd/otel-allocator/config/testdata/config_test.yaml
+++ b/cmd/otel-allocator/config/testdata/config_test.yaml
@@ -7,6 +7,7 @@ prometheus_cr:
   scrape_interval: 60s
 https:
   enabled: true
+  listen_addr: :8443
   ca_file_path: /path/to/ca.pem
   tls_cert_file_path: /path/to/cert.pem
   tls_key_file_path: /path/to/key.pem
diff --git a/cmd/otel-allocator/main.go b/cmd/otel-allocator/main.go
index f9531d6740..be2418902e 100644
--- a/cmd/otel-allocator/main.go
+++ b/cmd/otel-allocator/main.go
@@ -81,7 +81,13 @@ func main() {
 	log := ctrl.Log.WithName("allocator")
 
 	allocatorPrehook = prehook.New(cfg.FilterStrategy, log)
-	allocator, err = allocation.New(cfg.AllocationStrategy, log, allocation.WithFilter(allocatorPrehook))
+
+	var allocationOptions []allocation.AllocationOption
+	allocationOptions = append(allocationOptions, allocation.WithFilter(allocatorPrehook))
+	if cfg.AllocationFallbackStrategy != "" {
+		allocationOptions = append(allocationOptions, allocation.WithFallbackStrategy(cfg.AllocationFallbackStrategy))
+	}
+	allocator, err = allocation.New(cfg.AllocationStrategy, log, allocationOptions...)
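As an aside on the `main.go` hunk above: the patch switches `allocation.New` to a functional-options call so the fallback strategy can be appended only when configured. The sketch below shows that pattern in isolation; `Allocator`, `Option`, `WithFilter`, and `WithFallbackStrategy` here are simplified stand-ins, not the real `allocation` package API.

```go
package main

import "fmt"

// Simplified functional-options pattern: options are plain functions that
// mutate the value under construction.
type Allocator struct {
	strategy string
	fallback string
	filtered bool
}

type Option func(*Allocator)

func WithFilter() Option {
	return func(a *Allocator) { a.filtered = true }
}

func WithFallbackStrategy(s string) Option {
	return func(a *Allocator) { a.fallback = s }
}

func New(strategy string, opts ...Option) *Allocator {
	a := &Allocator{strategy: strategy}
	for _, opt := range opts {
		opt(a)
	}
	return a
}

func main() {
	// Collect options in a slice first, so optional ones (like the
	// fallback strategy) can be appended conditionally — mirroring the
	// allocationOptions slice in the patched main().
	opts := []Option{WithFilter()}
	fallback := "least-weighted" // hypothetical configured value
	if fallback != "" {
		opts = append(opts, WithFallbackStrategy(fallback))
	}
	a := New("consistent-hashing", opts...)
	fmt.Println(a.strategy, a.fallback, a.filtered)
	// → consistent-hashing least-weighted true
}
```

Building the option slice conditionally keeps `New` oblivious to which options are in play, which is why the patch can add `WithFallbackStrategy` without touching the constructor signature.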
 	if err != nil {
 		setupLog.Error(err, "Unable to initialize allocation strategy")
 		os.Exit(1)
diff --git a/cmd/otel-allocator/prehook/relabel.go b/cmd/otel-allocator/prehook/relabel.go
index 3595cb888e..6c96affa39 100644
--- a/cmd/otel-allocator/prehook/relabel.go
+++ b/cmd/otel-allocator/prehook/relabel.go
@@ -16,8 +16,6 @@
 package prehook
 
 import (
 	"github.com/go-logr/logr"
-	"github.com/prometheus/common/model"
-	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/model/relabel"
 
 	"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/target"
@@ -35,18 +33,6 @@ func NewRelabelConfigTargetFilter(log logr.Logger) Hook {
 	}
 }
 
-// helper function converts from model.LabelSet to []labels.Label.
-func convertLabelToPromLabelSet(lbls model.LabelSet) []labels.Label {
-	newLabels := make([]labels.Label, len(lbls))
-	index := 0
-	for k, v := range lbls {
-		newLabels[index].Name = string(k)
-		newLabels[index].Value = string(v)
-		index++
-	}
-	return newLabels
-}
-
 func (tf *RelabelConfigTargetFilter) Apply(targets map[string]*target.Item) map[string]*target.Item {
 	numTargets := len(targets)
 
@@ -57,20 +43,15 @@ func (tf *RelabelConfigTargetFilter) Apply(targets map[string]*target.Item) map[
 	// Note: jobNameKey != tItem.JobName (jobNameKey is hashed)
 	for jobNameKey, tItem := range targets {
-		keepTarget := true
-		lset := convertLabelToPromLabelSet(tItem.Labels)
+		var keepTarget bool
+		lset := tItem.Labels
 		for _, cfg := range tf.relabelCfg[tItem.JobName] {
-			if newLset, keep := relabel.Process(lset, cfg); !keep {
-				keepTarget = false
+			lset, keepTarget = relabel.Process(lset, cfg)
+			if !keepTarget {
+				delete(targets, jobNameKey)
 				break // inner loop
-			} else {
-				lset = newLset
 			}
 		}
-
-		if !keepTarget {
-			delete(targets, jobNameKey)
-		}
 	}
 
 	tf.log.V(2).Info("Filtering complete", "seen", numTargets, "kept", len(targets))
diff --git a/cmd/otel-allocator/prehook/relabel_test.go b/cmd/otel-allocator/prehook/relabel_test.go
index d30f645eba..9aa27764ca 100644
--- a/cmd/otel-allocator/prehook/relabel_test.go
+++ b/cmd/otel-allocator/prehook/relabel_test.go
@@ -22,6 +22,7 @@ import (
 	"testing"
 
 	"github.com/prometheus/common/model"
+	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/model/relabel"
 	"github.com/stretchr/testify/assert"
 	logf "sigs.k8s.io/controller-runtime/pkg/log"
@@ -184,10 +185,10 @@ func makeNNewTargets(rCfgs []relabelConfigObj, n int, numCollectors int, startin
 	relabelConfig := make(map[string][]*relabel.Config)
 	for i := startingIndex; i < n+startingIndex; i++ {
 		collector := fmt.Sprintf("collector-%d", colIndex(i, numCollectors))
-		label := model.LabelSet{
-			"collector": model.LabelValue(collector),
-			"i":         model.LabelValue(strconv.Itoa(i)),
-			"total":     model.LabelValue(strconv.Itoa(n + startingIndex)),
+		label := labels.Labels{
+			{Name: "collector", Value: collector},
+			{Name: "i", Value: strconv.Itoa(i)},
+			{Name: "total", Value: strconv.Itoa(n + startingIndex)},
 		}
 		jobName := fmt.Sprintf("test-job-%d", i)
 		newTarget := target.NewItem(jobName, "test-url", label, collector)
diff --git a/cmd/otel-allocator/server/bench_test.go b/cmd/otel-allocator/server/bench_test.go
index 8fcea90b0e..d441fd8e2c 100644
--- a/cmd/otel-allocator/server/bench_test.go
+++ b/cmd/otel-allocator/server/bench_test.go
@@ -24,6 +24,7 @@ import (
 	"github.com/gin-gonic/gin"
 	"github.com/prometheus/common/model"
 	promconfig "github.com/prometheus/prometheus/config"
+	"github.com/prometheus/prometheus/model/labels"
 	"github.com/stretchr/testify/assert"
 
 	"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/allocation"
@@ -198,7 +199,7 @@ func BenchmarkTargetItemsJSONHandler(b *testing.B) {
 		},
 	}
 	for _, tc := range tests {
-		data := makeNTargetItems(*random, tc.numTargets, tc.numLabels)
+		data := makeNTargetJSON(*random, tc.numTargets, tc.numLabels)
 		b.Run(fmt.Sprintf("%d_targets_%d_labels", tc.numTargets, tc.numLabels), func(b *testing.B) {
 			b.ReportAllocs()
 			for i := 0; i < b.N; i++ {
@@ -242,29 +243,39 @@ func makeNCollectorJSON(random rand.Rand, numCollectors, numItems int) map[strin
 	for i := 0; i < numCollectors; i++ {
 		items[randSeq(random, 20)] = collectorJSON{
 			Link: randSeq(random, 120),
-			Jobs: makeNTargetItems(random, numItems, 50),
+			Jobs: makeNTargetJSON(random, numItems, 50),
 		}
 	}
 	return items
 }
 
 func makeNTargetItems(random rand.Rand, numItems, numLabels int) []*target.Item {
+	builder := labels.NewBuilder(labels.EmptyLabels())
 	items := make([]*target.Item, 0, numItems)
 	for i := 0; i < numItems; i++ {
 		items = append(items, target.NewItem(
 			randSeq(random, 80),
 			randSeq(random, 150),
-			makeNNewLabels(random, numLabels),
+			makeNNewLabels(builder, random, numLabels),
 			randSeq(random, 30),
 		))
 	}
 	return items
 }
 
-func makeNNewLabels(random rand.Rand, n int) model.LabelSet {
-	labels := make(map[model.LabelName]model.LabelValue, n)
+func makeNTargetJSON(random rand.Rand, numItems, numLabels int) []*targetJSON {
+	items := makeNTargetItems(random, numItems, numLabels)
+	targets := make([]*targetJSON, numItems)
+	for i := 0; i < numItems; i++ {
+		targets[i] = targetJsonFromTargetItem(items[i])
+	}
+	return targets
+}
+
+func makeNNewLabels(builder *labels.Builder, random rand.Rand, n int) labels.Labels {
+	builder.Reset(labels.EmptyLabels())
 	for i := 0; i < n; i++ {
-		labels[model.LabelName(randSeq(random, 20))] = model.LabelValue(randSeq(random, 20))
+		builder.Set(randSeq(random, 20), randSeq(random, 20))
 	}
-	return labels
+	return builder.Labels()
 }
diff --git a/cmd/otel-allocator/server/mocks_test.go b/cmd/otel-allocator/server/mocks_test.go
index e44b178fa8..8620d70367 100644
--- a/cmd/otel-allocator/server/mocks_test.go
+++ b/cmd/otel-allocator/server/mocks_test.go
@@ -32,6 +32,7 @@ func (m *mockAllocator) SetTargets(_ map[string]*target.Item)
 func (m *mockAllocator) Collectors() map[string]*allocation.Collector { return nil }
 func (m *mockAllocator) GetTargetsForCollectorAndJob(_ string, _ string) []*target.Item { return nil }
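A note on the rewritten `Apply` in `relabel.go` above: it now deletes dropped targets inside the `range` loop, which is safe in Go (deleting the key currently being iterated is well-defined). The sketch below isolates that pattern; the `item` type and `keep` predicate are simplified stand-ins for `target.Item` and `relabel.Process`, which additionally returns the rewritten label set.

```go
package main

import (
	"fmt"
	"strings"
)

type item struct {
	job    string
	labels map[string]string
}

// filterInPlace drops entries that the keep predicate rejects. Deleting
// the current key while ranging over a Go map is explicitly permitted by
// the spec, so no second "to delete" pass is needed.
func filterInPlace(targets map[string]*item, keep func(*item) bool) map[string]*item {
	for key, t := range targets {
		if !keep(t) {
			delete(targets, key) // safe during range
		}
	}
	return targets
}

func main() {
	targets := map[string]*item{
		"a": {job: "app", labels: map[string]string{"env": "prod"}},
		"b": {job: "app", labels: map[string]string{"env": "dev"}},
	}
	kept := filterInPlace(targets, func(t *item) bool {
		return strings.HasPrefix(t.labels["env"], "prod")
	})
	fmt.Println(len(kept))
	// → 1
}
```

This is why the patch can drop the separate `if !keepTarget { delete(...) }` block after the inner loop: the delete happens at the point the drop decision is made.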
 func (m *mockAllocator) SetFilter(_ allocation.Filter) {}
+func (m *mockAllocator) SetFallbackStrategy(_ allocation.Strategy) {}
 
 func (m *mockAllocator) TargetItems() map[string]*target.Item {
 	return m.targetItems
diff --git a/cmd/otel-allocator/server/server.go b/cmd/otel-allocator/server/server.go
index 33e845103f..2e9df9a8b0 100644
--- a/cmd/otel-allocator/server/server.go
+++ b/cmd/otel-allocator/server/server.go
@@ -35,6 +35,7 @@ import (
 	"github.com/prometheus/client_golang/prometheus/promhttp"
 	promcommconfig "github.com/prometheus/common/config"
 	promconfig "github.com/prometheus/prometheus/config"
+	"github.com/prometheus/prometheus/model/labels"
 	"gopkg.in/yaml.v2"
 
 	"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/allocation"
@@ -57,8 +58,17 @@ var (
 )
 
 type collectorJSON struct {
-	Link string         `json:"_link"`
-	Jobs []*target.Item `json:"targets"`
+	Link string        `json:"_link"`
+	Jobs []*targetJSON `json:"targets"`
+}
+
+type linkJSON struct {
+	Link string `json:"_link"`
+}
+
+type targetJSON struct {
+	TargetURL []string      `json:"targets"`
+	Labels    labels.Labels `json:"labels"`
 }
 
 type Server struct {
@@ -263,9 +273,9 @@ func (s *Server) ReadinessProbeHandler(c *gin.Context) {
 }
 
 func (s *Server) JobHandler(c *gin.Context) {
-	displayData := make(map[string]target.LinkJSON)
+	displayData := make(map[string]linkJSON)
 	for _, v := range s.allocator.TargetItems() {
-		displayData[v.JobName] = target.LinkJSON{Link: v.Link.Link}
+		displayData[v.JobName] = linkJSON{Link: fmt.Sprintf("/jobs/%s/targets", url.QueryEscape(v.JobName))}
 	}
 	s.jsonHandler(c.Writer, displayData)
 }
@@ -294,16 +304,16 @@ func (s *Server) TargetsHandler(c *gin.Context) {
 	if len(q) == 0 {
 		displayData := GetAllTargetsByJob(s.allocator, jobId)
 		s.jsonHandler(c.Writer, displayData)
 	} else {
-		tgs := s.allocator.GetTargetsForCollectorAndJob(q[0], jobId)
+		targets := GetAllTargetsByCollectorAndJob(s.allocator, q[0], jobId)
 		// Displays empty list if nothing matches
-		if len(tgs) == 0 {
+		if len(targets) == 0 {
 			s.jsonHandler(c.Writer, []interface{}{})
 			return
 		}
-		s.jsonHandler(c.Writer, tgs)
+		s.jsonHandler(c.Writer, targets)
 	}
+
 }
 
 func (s *Server) errorHandler(w http.ResponseWriter, err error) {
@@ -323,12 +333,25 @@ func (s *Server) jsonHandler(w http.ResponseWriter, data interface{}) {
 
 func GetAllTargetsByJob(allocator allocation.Allocator, job string) map[string]collectorJSON {
 	displayData := make(map[string]collectorJSON)
 	for _, col := range allocator.Collectors() {
-		items := allocator.GetTargetsForCollectorAndJob(col.Name, job)
-		displayData[col.Name] = collectorJSON{Link: fmt.Sprintf("/jobs/%s/targets?collector_id=%s", url.QueryEscape(job), col.Name), Jobs: items}
+		targets := GetAllTargetsByCollectorAndJob(allocator, col.Name, job)
+		displayData[col.Name] = collectorJSON{
+			Link: fmt.Sprintf("/jobs/%s/targets?collector_id=%s", url.QueryEscape(job), col.Name),
+			Jobs: targets,
+		}
 	}
 	return displayData
 }
 
+// GetAllTargetsByCollectorAndJob returns all the targets for a given collector and job.
+func GetAllTargetsByCollectorAndJob(allocator allocation.Allocator, collectorName string, jobName string) []*targetJSON {
+	items := allocator.GetTargetsForCollectorAndJob(collectorName, jobName)
+	targets := make([]*targetJSON, len(items))
+	for i, item := range items {
+		targets[i] = targetJsonFromTargetItem(item)
+	}
+	return targets
+}
+
 // registerPprof registers the pprof handlers and either serves the requested
 // specific profile or falls back to index handler.
func registerPprof(g *gin.RouterGroup) { @@ -348,3 +371,10 @@ func registerPprof(g *gin.RouterGroup) { } }) } + +func targetJsonFromTargetItem(item *target.Item) *targetJSON { + return &targetJSON{ + TargetURL: []string{item.TargetURL}, + Labels: item.Labels, + } +} diff --git a/cmd/otel-allocator/server/server_test.go b/cmd/otel-allocator/server/server_test.go index 88b8ad9368..4bc403251c 100644 --- a/cmd/otel-allocator/server/server_test.go +++ b/cmd/otel-allocator/server/server_test.go @@ -28,6 +28,7 @@ import ( "github.com/prometheus/common/config" "github.com/prometheus/common/model" promconfig "github.com/prometheus/prometheus/config" + "github.com/prometheus/prometheus/model/labels" "github.com/prometheus/prometheus/model/relabel" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" @@ -41,11 +42,11 @@ import ( var ( logger = logf.Log.WithName("server-unit-tests") - baseLabelSet = model.LabelSet{ - "test_label": "test-value", + baseLabelSet = labels.Labels{ + {Name: "test_label", Value: "test-value"}, } - testJobLabelSetTwo = model.LabelSet{ - "test_label": "test-value2", + testJobLabelSetTwo = labels.Labels{ + {Name: "test_label", Value: "test-value2"}, } baseTargetItem = target.NewItem("test-job", "test-url", baseLabelSet, "test-collector") secondTargetItem = target.NewItem("test-job", "test-url", baseLabelSet, "test-collector") @@ -74,7 +75,7 @@ func TestServer_TargetsHandler(t *testing.T) { allocator allocation.Allocator } type want struct { - items []*target.Item + items []*targetJSON errString string } tests := []struct { @@ -91,7 +92,7 @@ func TestServer_TargetsHandler(t *testing.T) { allocator: leastWeighted, }, want: want{ - items: []*target.Item{}, + items: []*targetJSON{}, }, }, { @@ -105,11 +106,11 @@ func TestServer_TargetsHandler(t *testing.T) { allocator: leastWeighted, }, want: want{ - items: []*target.Item{ + items: []*targetJSON{ { TargetURL: []string{"test-url"}, - Labels: map[model.LabelName]model.LabelValue{ - 
"test_label": "test-value", + Labels: labels.Labels{ + {Name: "test_label", Value: "test-value"}, }, }, }, @@ -127,11 +128,11 @@ func TestServer_TargetsHandler(t *testing.T) { allocator: leastWeighted, }, want: want{ - items: []*target.Item{ + items: []*targetJSON{ { TargetURL: []string{"test-url"}, - Labels: map[model.LabelName]model.LabelValue{ - "test_label": "test-value", + Labels: labels.Labels{ + {Name: "test_label", Value: "test-value"}, }, }, }, @@ -149,17 +150,17 @@ func TestServer_TargetsHandler(t *testing.T) { allocator: leastWeighted, }, want: want{ - items: []*target.Item{ + items: []*targetJSON{ { TargetURL: []string{"test-url"}, - Labels: map[model.LabelName]model.LabelValue{ - "test_label": "test-value", + Labels: labels.Labels{ + {Name: "test_label", Value: "test-value"}, }, }, { TargetURL: []string{"test-url2"}, - Labels: map[model.LabelName]model.LabelValue{ - "test_label": "test-value2", + Labels: labels.Labels{ + {Name: "test_label", Value: "test-value2"}, }, }, }, @@ -186,7 +187,7 @@ func TestServer_TargetsHandler(t *testing.T) { assert.EqualError(t, err, tt.want.errString) return } - var itemResponse []*target.Item + var itemResponse []*targetJSON err = json.Unmarshal(bodyBytes, &itemResponse) assert.NoError(t, err) assert.ElementsMatch(t, tt.want.items, itemResponse) @@ -555,40 +556,40 @@ func TestServer_JobHandler(t *testing.T) { description string targetItems map[string]*target.Item expectedCode int - expectedJobs map[string]target.LinkJSON + expectedJobs map[string]linkJSON }{ { description: "nil jobs", targetItems: nil, expectedCode: http.StatusOK, - expectedJobs: make(map[string]target.LinkJSON), + expectedJobs: make(map[string]linkJSON), }, { description: "empty jobs", targetItems: map[string]*target.Item{}, expectedCode: http.StatusOK, - expectedJobs: make(map[string]target.LinkJSON), + expectedJobs: make(map[string]linkJSON), }, { description: "one job", targetItems: map[string]*target.Item{ - "targetitem": target.NewItem("job1", "", 
model.LabelSet{}, ""), + "targetitem": target.NewItem("job1", "", labels.Labels{}, ""), }, expectedCode: http.StatusOK, - expectedJobs: map[string]target.LinkJSON{ + expectedJobs: map[string]linkJSON{ "job1": newLink("job1"), }, }, { description: "multiple jobs", targetItems: map[string]*target.Item{ - "a": target.NewItem("job1", "", model.LabelSet{}, ""), - "b": target.NewItem("job2", "", model.LabelSet{}, ""), - "c": target.NewItem("job3", "", model.LabelSet{}, ""), - "d": target.NewItem("job3", "", model.LabelSet{}, ""), - "e": target.NewItem("job3", "", model.LabelSet{}, "")}, + "a": target.NewItem("job1", "", labels.Labels{}, ""), + "b": target.NewItem("job2", "", labels.Labels{}, ""), + "c": target.NewItem("job3", "", labels.Labels{}, ""), + "d": target.NewItem("job3", "", labels.Labels{}, ""), + "e": target.NewItem("job3", "", labels.Labels{}, "")}, expectedCode: http.StatusOK, - expectedJobs: map[string]target.LinkJSON{ + expectedJobs: map[string]linkJSON{ "job1": newLink("job1"), "job2": newLink("job2"), "job3": newLink("job3"), @@ -609,7 +610,7 @@ func TestServer_JobHandler(t *testing.T) { assert.Equal(t, tc.expectedCode, result.StatusCode) bodyBytes, err := io.ReadAll(result.Body) require.NoError(t, err) - jobs := map[string]target.LinkJSON{} + jobs := map[string]linkJSON{} err = json.Unmarshal(bodyBytes, &jobs) require.NoError(t, err) assert.Equal(t, tc.expectedJobs, jobs) @@ -737,6 +738,6 @@ func TestServer_ScrapeConfigRespose(t *testing.T) { } } -func newLink(jobName string) target.LinkJSON { - return target.LinkJSON{Link: fmt.Sprintf("/jobs/%s/targets", url.QueryEscape(jobName))} +func newLink(jobName string) linkJSON { + return linkJSON{Link: fmt.Sprintf("/jobs/%s/targets", url.QueryEscape(jobName))} } diff --git a/cmd/otel-allocator/target/discovery.go b/cmd/otel-allocator/target/discovery.go index d7dcb4e127..eb7498e5ad 100644 --- a/cmd/otel-allocator/target/discovery.go +++ b/cmd/otel-allocator/target/discovery.go @@ -24,6 +24,8 @@ import ( 
 	"github.com/prometheus/common/model"
 	promconfig "github.com/prometheus/prometheus/config"
 	"github.com/prometheus/prometheus/discovery"
+	"github.com/prometheus/prometheus/discovery/targetgroup"
+	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/model/relabel"
 	"gopkg.in/yaml.v3"
 
@@ -104,28 +106,42 @@ func (m *Discoverer) ApplyConfig(source allocatorWatcher.EventSource, scrapeConf
 }
 
 func (m *Discoverer) Watch(fn func(targets map[string]*Item)) error {
+	labelsBuilder := labels.NewBuilder(labels.EmptyLabels())
 	for {
 		select {
 		case <-m.close:
 			m.log.Info("Service Discovery watch event stopped: discovery manager closed")
 			return nil
 		case tsets := <-m.manager.SyncCh():
-			targets := map[string]*Item{}
-
-			for jobName, tgs := range tsets {
-				var count float64 = 0
-				for _, tg := range tgs {
-					for _, t := range tg.Targets {
-						count++
-						item := NewItem(jobName, string(t[model.AddressLabel]), t.Merge(tg.Labels), "")
-						targets[item.Hash()] = item
-					}
+			m.ProcessTargets(labelsBuilder, tsets, fn)
+		}
+	}
+}
+
+func (m *Discoverer) ProcessTargets(builder *labels.Builder, tsets map[string][]*targetgroup.Group, fn func(targets map[string]*Item)) {
+	targets := map[string]*Item{}
+
+	for jobName, tgs := range tsets {
+		var count float64 = 0
+		for _, tg := range tgs {
+			builder.Reset(labels.EmptyLabels())
+			for ln, lv := range tg.Labels {
+				builder.Set(string(ln), string(lv))
+			}
+			groupLabels := builder.Labels()
+			for _, t := range tg.Targets {
+				count++
+				builder.Reset(groupLabels)
+				for ln, lv := range t {
+					builder.Set(string(ln), string(lv))
 				}
-				targetsDiscovered.WithLabelValues(jobName).Set(count)
+				item := NewItem(jobName, string(t[model.AddressLabel]), builder.Labels(), "")
+				targets[item.Hash()] = item
 			}
-			fn(targets)
 		}
+		targetsDiscovered.WithLabelValues(jobName).Set(count)
 	}
+	fn(targets)
 }
 
 func (m *Discoverer) Close() {
diff --git a/cmd/otel-allocator/target/discovery_test.go b/cmd/otel-allocator/target/discovery_test.go
index f773b295c0..7eb2883ee9 100644
--- a/cmd/otel-allocator/target/discovery_test.go
+++ b/cmd/otel-allocator/target/discovery_test.go
@@ -87,7 +87,7 @@ func TestDiscovery(t *testing.T) {
 	err := manager.Watch(func(targets map[string]*Item) {
 		var result []string
 		for _, t := range targets {
-			result = append(result, t.TargetURL[0])
+			result = append(result, t.TargetURL)
 		}
 		results <- result
 	})
diff --git a/cmd/otel-allocator/target/target.go b/cmd/otel-allocator/target/target.go
index 3341560329..5a157bc11d 100644
--- a/cmd/otel-allocator/target/target.go
+++ b/cmd/otel-allocator/target/target.go
@@ -15,36 +15,30 @@
 package target
 
 import (
-	"fmt"
-	"net/url"
+	"strconv"
 
-	"github.com/prometheus/common/model"
+	"github.com/prometheus/prometheus/model/labels"
 )
 
 // nodeLabels are labels that are used to identify the node on which the given
 // target is residing. To learn more about these labels, please refer to:
 // https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config
 var (
-	nodeLabels = []model.LabelName{
+	nodeLabels = []string{
 		"__meta_kubernetes_pod_node_name",
 		"__meta_kubernetes_node_name",
 		"__meta_kubernetes_endpoint_node_name",
 	}
-	endpointSliceTargetKindLabel model.LabelName = "__meta_kubernetes_endpointslice_address_target_kind"
-	endpointSliceTargetNameLabel model.LabelName = "__meta_kubernetes_endpointslice_address_target_name"
+	endpointSliceTargetKindLabel = "__meta_kubernetes_endpointslice_address_target_kind"
+	endpointSliceTargetNameLabel = "__meta_kubernetes_endpointslice_address_target_name"
+	relevantLabelNames           = append(nodeLabels, endpointSliceTargetKindLabel, endpointSliceTargetNameLabel)
 )
 
-// LinkJSON This package contains common structs and methods that relate to scrape targets.
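On the `ProcessTargets` rewrite above: group labels are built once per target group, then each target's own labels are layered on top, so a target label overrides a group label of the same name. The sketch below shows that merge order with a plain map standing in for the reused `labels.Builder`.

```go
package main

import "fmt"

// mergeLabels applies group-level labels first, then the target's own
// labels, so the target wins on any name collision — the same precedence
// as builder.Reset(groupLabels) followed by per-target Set calls.
func mergeLabels(group, target map[string]string) map[string]string {
	merged := make(map[string]string, len(group)+len(target))
	for k, v := range group {
		merged[k] = v
	}
	for k, v := range target { // target labels win on conflict
		merged[k] = v
	}
	return merged
}

func main() {
	group := map[string]string{"env": "prod", "team": "obs"}
	target := map[string]string{"env": "dev", "__address__": "10.0.0.1:9090"}
	fmt.Println(mergeLabels(group, target)["env"])
	// → dev
}
```

Resetting the builder to the precomputed group labels for every target is what lets the patch avoid re-adding the group labels target by target, which is the allocation win the benchmark above measures.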
-type LinkJSON struct {
-	Link string `json:"_link"`
-}
-
 type Item struct {
-	JobName       string         `json:"-"`
-	Link          LinkJSON       `json:"-"`
-	TargetURL     []string       `json:"targets"`
-	Labels        model.LabelSet `json:"labels"`
-	CollectorName string         `json:"-"`
+	JobName       string
+	TargetURL     string
+	Labels        labels.Labels
+	CollectorName string
 
 	hash string
 }
 
@@ -53,30 +47,30 @@ func (t *Item) Hash() string {
 }
 
 func (t *Item) GetNodeName() string {
+	relevantLabels := t.Labels.MatchLabels(true, relevantLabelNames...)
 	for _, label := range nodeLabels {
-		if val, ok := t.Labels[label]; ok {
-			return string(val)
+		if val := relevantLabels.Get(label); val != "" {
+			return val
 		}
 	}
 
-	if val := t.Labels[endpointSliceTargetKindLabel]; val != "Node" {
+	if val := relevantLabels.Get(endpointSliceTargetKindLabel); val != "Node" {
 		return ""
 	}
 
-	return string(t.Labels[endpointSliceTargetNameLabel])
+	return relevantLabels.Get(endpointSliceTargetNameLabel)
 }
 
 // NewItem Creates a new target item.
 // INVARIANTS:
 // * Item fields must not be modified after creation.
 // * Item should only be made via its constructor, never directly.
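The `GetNodeName` logic above tries the node-name labels in order and only falls back to the endpointslice target name when the target kind is `Node`. A stdlib-only sketch of that lookup, with a plain map standing in for `labels.Labels`:

```go
package main

import "fmt"

// Node-identifying labels, checked in order — same list as the patched
// target.go.
var nodeLabels = []string{
	"__meta_kubernetes_pod_node_name",
	"__meta_kubernetes_node_name",
	"__meta_kubernetes_endpoint_node_name",
}

// getNodeName mirrors Item.GetNodeName: direct node labels first, then
// the endpointslice address target name, but only if the target kind is
// "Node" (a Pod-kind target has no node name to offer here).
func getNodeName(lbls map[string]string) string {
	for _, name := range nodeLabels {
		if v := lbls[name]; v != "" {
			return v
		}
	}
	if lbls["__meta_kubernetes_endpointslice_address_target_kind"] != "Node" {
		return ""
	}
	return lbls["__meta_kubernetes_endpointslice_address_target_name"]
}

func main() {
	fmt.Println(getNodeName(map[string]string{
		"__meta_kubernetes_pod_node_name": "kind-control-plane",
	}))
	// → kind-control-plane
}
```

The `MatchLabels(true, relevantLabelNames...)` call in the patched version narrows the full label set to just these six names before the lookups, trading one filtered copy for repeated scans of a large label set.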
-func NewItem(jobName string, targetURL string, label model.LabelSet, collectorName string) *Item { +func NewItem(jobName string, targetURL string, labels labels.Labels, collectorName string) *Item { return &Item{ JobName: jobName, - Link: LinkJSON{Link: fmt.Sprintf("/jobs/%s/targets", url.QueryEscape(jobName))}, - hash: jobName + targetURL + label.Fingerprint().String(), - TargetURL: []string{targetURL}, - Labels: label, + hash: jobName + targetURL + strconv.FormatUint(labels.Hash(), 10), + TargetURL: targetURL, + Labels: labels, CollectorName: collectorName, } } diff --git a/cmd/otel-allocator/watcher/promOperator.go b/cmd/otel-allocator/watcher/promOperator.go index ae2ddcb68e..517f065ff3 100644 --- a/cmd/otel-allocator/watcher/promOperator.go +++ b/cmd/otel-allocator/watcher/promOperator.go @@ -22,7 +22,7 @@ import ( "time" "github.com/blang/semver/v4" - "github.com/go-kit/log" + gokitlog "github.com/go-kit/log" "github.com/go-kit/log/level" "github.com/go-logr/logr" monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1" @@ -53,6 +53,9 @@ const ( ) func NewPrometheusCRWatcher(ctx context.Context, logger logr.Logger, cfg allocatorconfig.Config) (*PrometheusCRWatcher, error) { + // TODO: Remove this after go 1.23 upgrade + promLogger := level.NewFilter(gokitlog.NewLogfmtLogger(os.Stderr), level.AllowWarn()) + slogger := slog.New(logr.ToSlogHandler(logger)) var resourceSelector *prometheus.ResourceSelector mClient, err := monitoringclient.NewForConfig(cfg.ClusterConfig) if err != nil { @@ -79,18 +82,20 @@ func NewPrometheusCRWatcher(ctx context.Context, logger logr.Logger, cfg allocat Spec: monitoringv1.PrometheusSpec{ CommonPrometheusFields: monitoringv1.CommonPrometheusFields{ ScrapeInterval: monitoringv1.Duration(cfg.PrometheusCR.ScrapeInterval.String()), - ServiceMonitorSelector: cfg.PrometheusCR.ServiceMonitorSelector, PodMonitorSelector: cfg.PrometheusCR.PodMonitorSelector, - ServiceMonitorNamespaceSelector: 
cfg.PrometheusCR.ServiceMonitorNamespaceSelector, PodMonitorNamespaceSelector: cfg.PrometheusCR.PodMonitorNamespaceSelector, + ServiceMonitorSelector: cfg.PrometheusCR.ServiceMonitorSelector, + ServiceMonitorNamespaceSelector: cfg.PrometheusCR.ServiceMonitorNamespaceSelector, + ScrapeConfigSelector: cfg.PrometheusCR.ScrapeConfigSelector, + ScrapeConfigNamespaceSelector: cfg.PrometheusCR.ScrapeConfigNamespaceSelector, + ProbeSelector: cfg.PrometheusCR.ProbeSelector, + ProbeNamespaceSelector: cfg.PrometheusCR.ProbeNamespaceSelector, ServiceDiscoveryRole: &serviceDiscoveryRole, }, }, } - promOperatorLogger := level.NewFilter(log.NewLogfmtLogger(os.Stderr), level.AllowWarn()) - promOperatorSlogLogger := slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{Level: slog.LevelWarn})) - generator, err := prometheus.NewConfigGenerator(promOperatorLogger, prom, true) + generator, err := prometheus.NewConfigGenerator(promLogger, prom, true) if err != nil { return nil, err @@ -108,7 +113,7 @@ func NewPrometheusCRWatcher(ctx context.Context, logger logr.Logger, cfg allocat logger.Error(err, "Retrying namespace informer creation in promOperator CRD watcher") return true }, func() error { - nsMonInf, err = getNamespaceInformer(ctx, map[string]struct{}{v1.NamespaceAll: {}}, promOperatorLogger, clientset, operatorMetrics) + nsMonInf, err = getNamespaceInformer(ctx, map[string]struct{}{v1.NamespaceAll: {}}, promLogger, clientset, operatorMetrics) return err }) if getNamespaceInformerErr != nil { @@ -116,13 +121,13 @@ func NewPrometheusCRWatcher(ctx context.Context, logger logr.Logger, cfg allocat return nil, getNamespaceInformerErr } - resourceSelector, err = prometheus.NewResourceSelector(promOperatorSlogLogger, prom, store, nsMonInf, operatorMetrics, eventRecorder) + resourceSelector, err = prometheus.NewResourceSelector(slogger, prom, store, nsMonInf, operatorMetrics, eventRecorder) if err != nil { logger.Error(err, "Failed to create resource selector in promOperator CRD 
watcher") } return &PrometheusCRWatcher{ - logger: logger, + logger: slogger, kubeMonitoringClient: mClient, k8sClient: clientset, informers: monitoringInformers, @@ -133,13 +138,15 @@ func NewPrometheusCRWatcher(ctx context.Context, logger logr.Logger, cfg allocat kubeConfigPath: cfg.KubeConfigFilePath, podMonitorNamespaceSelector: cfg.PrometheusCR.PodMonitorNamespaceSelector, serviceMonitorNamespaceSelector: cfg.PrometheusCR.ServiceMonitorNamespaceSelector, + scrapeConfigNamespaceSelector: cfg.PrometheusCR.ScrapeConfigNamespaceSelector, + probeNamespaceSelector: cfg.PrometheusCR.ProbeNamespaceSelector, resourceSelector: resourceSelector, store: store, }, nil } type PrometheusCRWatcher struct { - logger logr.Logger + logger *slog.Logger kubeMonitoringClient monitoringclient.Interface k8sClient kubernetes.Interface informers map[string]*informers.ForResource @@ -150,12 +157,13 @@ type PrometheusCRWatcher struct { kubeConfigPath string podMonitorNamespaceSelector *metav1.LabelSelector serviceMonitorNamespaceSelector *metav1.LabelSelector + scrapeConfigNamespaceSelector *metav1.LabelSelector + probeNamespaceSelector *metav1.LabelSelector resourceSelector *prometheus.ResourceSelector store *assets.StoreBuilder } -func getNamespaceInformer(ctx context.Context, allowList map[string]struct{}, promOperatorLogger log.Logger, clientset kubernetes.Interface, operatorMetrics *operator.Metrics) (cache.SharedIndexInformer, error) { - +func getNamespaceInformer(ctx context.Context, allowList map[string]struct{}, promOperatorLogger gokitlog.Logger, clientset kubernetes.Interface, operatorMetrics *operator.Metrics) (cache.SharedIndexInformer, error) { kubernetesVersion, err := clientset.Discovery().ServerVersion() if err != nil { return nil, err @@ -196,9 +204,21 @@ func getInformers(factory informers.FactoriesForNamespaces) (map[string]*informe return nil, err } + probeInformers, err := informers.NewInformersForResource(factory, 
monitoringv1.SchemeGroupVersion.WithResource(monitoringv1.ProbeName)) + if err != nil { + return nil, err + } + + scrapeConfigInformers, err := informers.NewInformersForResource(factory, promv1alpha1.SchemeGroupVersion.WithResource(promv1alpha1.ScrapeConfigName)) + if err != nil { + return nil, err + } + return map[string]*informers.ForResource{ monitoringv1.ServiceMonitorName: serviceMonitorInformers, monitoringv1.PodMonitorName: podMonitorInformers, + monitoringv1.ProbeName: probeInformers, + promv1alpha1.ScrapeConfigName: scrapeConfigInformers, }, nil } @@ -210,7 +230,7 @@ func (w *PrometheusCRWatcher) Watch(upstreamEvents chan Event, upstreamErrors ch if w.nsInformer != nil { go w.nsInformer.Run(w.stopChannel) - if ok := cache.WaitForNamedCacheSync("namespace", w.stopChannel, w.nsInformer.HasSynced); !ok { + if ok := w.WaitForNamedCacheSync("namespace", w.nsInformer.HasSynced); !ok { success = false } @@ -228,10 +248,12 @@ func (w *PrometheusCRWatcher) Watch(upstreamEvents chan Event, upstreamErrors ch for name, selector := range map[string]*metav1.LabelSelector{ "PodMonitorNamespaceSelector": w.podMonitorNamespaceSelector, "ServiceMonitorNamespaceSelector": w.serviceMonitorNamespaceSelector, + "ProbeNamespaceSelector": w.probeNamespaceSelector, + "ScrapeConfigNamespaceSelector": w.scrapeConfigNamespaceSelector, } { sync, err := k8sutil.LabelSelectionHasChanged(old.Labels, cur.Labels, selector) if err != nil { - w.logger.Error(err, "Failed to check label selection between namespaces while handling namespace updates", "selector", name) + w.logger.Error("Failed to check label selection between namespaces while handling namespace updates", "selector", name, "error", err) return } @@ -252,8 +274,9 @@ func (w *PrometheusCRWatcher) Watch(upstreamEvents chan Event, upstreamErrors ch for name, resource := range w.informers { resource.Start(w.stopChannel) - if ok := cache.WaitForNamedCacheSync(name, w.stopChannel, resource.HasSynced); !ok { - success = false + if ok := 
w.WaitForNamedCacheSync(name, resource.HasSynced); !ok { + w.logger.Info("skipping informer", "informer", name) + continue } // only send an event notification if there isn't one already @@ -342,6 +365,16 @@ func (w *PrometheusCRWatcher) LoadConfig(ctx context.Context) (*promconfig.Confi return nil, err } + probeInstances, err := w.resourceSelector.SelectProbes(ctx, w.informers[monitoringv1.ProbeName].ListAllByNamespace) + if err != nil { + return nil, err + } + + scrapeConfigInstances, err := w.resourceSelector.SelectScrapeConfigs(ctx, w.informers[promv1alpha1.ScrapeConfigName].ListAllByNamespace) + if err != nil { + return nil, err + } + generatedConfig, err := w.configGenerator.GenerateServerConfiguration( "30s", "", @@ -352,8 +385,8 @@ func (w *PrometheusCRWatcher) LoadConfig(ctx context.Context) (*promconfig.Confi nil, serviceMonitorInstances, podMonitorInstances, - map[string]*monitoringv1.Probe{}, - map[string]*promv1alpha1.ScrapeConfig{}, + probeInstances, + scrapeConfigInstances, w.store, nil, nil, @@ -384,3 +417,41 @@ func (w *PrometheusCRWatcher) LoadConfig(ctx context.Context) (*promconfig.Confi return promCfg, nil } } + +// WaitForNamedCacheSync adds a timeout to the informer's wait for the cache to be ready. +// If the PrometheusCRWatcher is unable to load an informer within 15 seconds, the method is +// cancelled and returns false. A successful informer load will return true. This method also +// will be cancelled if the target allocator's stopChannel is called before it returns. +// +// This method is inspired by the upstream prometheus-operator implementation, with a shorter timeout +// and support for the PrometheusCRWatcher's stopChannel. 
+// https://github.com/prometheus-operator/prometheus-operator/blob/293c16c854ce69d1da9fdc8f0705de2d67bfdbfa/pkg/operator/operator.go#L433 +func (w *PrometheusCRWatcher) WaitForNamedCacheSync(controllerName string, inf cache.InformerSynced) bool { + ctx, cancel := context.WithTimeout(context.Background(), time.Second*15) + t := time.NewTicker(time.Second * 5) + defer t.Stop() + + go func() { + for { + select { + case <-t.C: + w.logger.Debug("cache sync not yet completed") + case <-ctx.Done(): + return + case <-w.stopChannel: + w.logger.Warn("stop received, shutting down cache syncing") + cancel() + return + } + } + }() + + ok := cache.WaitForNamedCacheSync(controllerName, ctx.Done(), inf) + if !ok { + w.logger.Error("failed to sync cache") + } else { + w.logger.Debug("successfully synced cache") + } + + return ok +} diff --git a/cmd/otel-allocator/watcher/promOperator_test.go b/cmd/otel-allocator/watcher/promOperator_test.go index 7bd3f0f443..3cc959046e 100644 --- a/cmd/otel-allocator/watcher/promOperator_test.go +++ b/cmd/otel-allocator/watcher/promOperator_test.go @@ -24,6 +24,7 @@ import ( "github.com/go-kit/log" "github.com/go-kit/log/level" monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1" + promv1alpha1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1alpha1" "github.com/prometheus-operator/prometheus-operator/pkg/assets" fakemonitoringclient "github.com/prometheus-operator/prometheus-operator/pkg/client/versioned/fake" "github.com/prometheus-operator/prometheus-operator/pkg/informers" @@ -35,6 +36,7 @@ import ( promconfig "github.com/prometheus/prometheus/config" "github.com/prometheus/prometheus/discovery" kubeDiscovery "github.com/prometheus/prometheus/discovery/kubernetes" + "github.com/prometheus/prometheus/discovery/targetgroup" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" v1 "k8s.io/api/core/v1" @@ -59,6 +61,8 @@ func TestLoadConfig(t *testing.T) { name string 
serviceMonitors []*monitoringv1.ServiceMonitor podMonitors []*monitoringv1.PodMonitor + scrapeConfigs []*promv1alpha1.ScrapeConfig + probes []*monitoringv1.Probe want *promconfig.Config wantErr bool cfg allocatorconfig.Config @@ -662,6 +666,136 @@ func TestLoadConfig(t *testing.T) { }, }, }, + { + name: "scrape configs selector test", + scrapeConfigs: []*promv1alpha1.ScrapeConfig{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "scrapeconfig-test-1", + Namespace: "test", + Labels: map[string]string{ + "testpod": "testpod", + }, + }, + Spec: promv1alpha1.ScrapeConfigSpec{ + JobName: func() *string { + j := "scrapeConfig/test/scrapeconfig-test-1" + return &j + }(), + StaticConfigs: []promv1alpha1.StaticConfig{ + { + Targets: []promv1alpha1.Target{"127.0.0.1:8888"}, + Labels: nil, + }, + }, + }, + }, + }, + cfg: allocatorconfig.Config{ + PrometheusCR: allocatorconfig.PrometheusCRConfig{ + ScrapeConfigSelector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "testpod": "testpod", + }, + }, + }, + }, + want: &promconfig.Config{ + ScrapeConfigs: []*promconfig.ScrapeConfig{ + { + JobName: "scrapeConfig/test/scrapeconfig-test-1", + ScrapeInterval: model.Duration(30 * time.Second), + ScrapeProtocols: defaultScrapeProtocols, + ScrapeTimeout: model.Duration(10 * time.Second), + HonorTimestamps: true, + HonorLabels: false, + Scheme: "http", + MetricsPath: "/metrics", + ServiceDiscoveryConfigs: []discovery.Config{ + discovery.StaticConfig{ + &targetgroup.Group{ + Targets: []model.LabelSet{ + map[model.LabelName]model.LabelValue{ + "__address__": "127.0.0.1:8888", + }, + }, + Labels: map[model.LabelName]model.LabelValue{}, + Source: "0", + }, + }, + }, + HTTPClientConfig: config.DefaultHTTPClientConfig, + EnableCompression: true, + }, + }, + }, + }, + { + name: "probe selector test", + probes: []*monitoringv1.Probe{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "probe-test-1", + Namespace: "test", + Labels: map[string]string{ + "testpod": "testpod", + }, + }, + Spec: 
monitoringv1.ProbeSpec{ + JobName: "probe/test/probe-1/0", + ProberSpec: monitoringv1.ProberSpec{ + URL: "localhost:50671", + Path: "/metrics", + }, + Targets: monitoringv1.ProbeTargets{ + StaticConfig: &monitoringv1.ProbeTargetStaticConfig{ + Targets: []string{"prometheus.io"}, + }, + }, + }, + }, + }, + cfg: allocatorconfig.Config{ + PrometheusCR: allocatorconfig.PrometheusCRConfig{ + ProbeSelector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "testpod": "testpod", + }, + }, + }, + }, + want: &promconfig.Config{ + ScrapeConfigs: []*promconfig.ScrapeConfig{ + { + JobName: "probe/test/probe-test-1", + ScrapeInterval: model.Duration(30 * time.Second), + ScrapeProtocols: defaultScrapeProtocols, + ScrapeTimeout: model.Duration(10 * time.Second), + HonorTimestamps: true, + HonorLabels: false, + Scheme: "http", + MetricsPath: "/metrics", + ServiceDiscoveryConfigs: []discovery.Config{ + discovery.StaticConfig{ + &targetgroup.Group{ + Targets: []model.LabelSet{ + map[model.LabelName]model.LabelValue{ + "__address__": "prometheus.io", + }, + }, + Labels: map[model.LabelName]model.LabelValue{ + "namespace": "test", + }, + Source: "0", + }, + }, + }, + HTTPClientConfig: config.DefaultHTTPClientConfig, + EnableCompression: true, + }, + }, + }, + }, { name: "service monitor namespace selector test", serviceMonitors: []*monitoringv1.ServiceMonitor{ @@ -805,7 +939,7 @@ func TestLoadConfig(t *testing.T) { } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { - w, _ := getTestPrometheusCRWatcher(t, tt.serviceMonitors, tt.podMonitors, tt.cfg) + w, _ := getTestPrometheusCRWatcher(t, tt.serviceMonitors, tt.podMonitors, tt.probes, tt.scrapeConfigs, tt.cfg) // Start namespace informers in order to populate cache. 
go w.nsInformer.Run(w.stopChannel) @@ -910,7 +1044,7 @@ func TestNamespaceLabelUpdate(t *testing.T) { ScrapeConfigs: []*promconfig.ScrapeConfig{}, } - w, source := getTestPrometheusCRWatcher(t, nil, podMonitors, cfg) + w, source := getTestPrometheusCRWatcher(t, nil, podMonitors, nil, nil, cfg) events := make(chan Event, 1) eventInterval := 5 * time.Millisecond @@ -946,7 +1080,7 @@ func TestNamespaceLabelUpdate(t *testing.T) { select { case <-events: - case <-time.After(time.Second): + case <-time.After(5 * time.Second): } got, err = w.LoadConfig(context.Background()) @@ -973,10 +1107,10 @@ func TestRateLimit(t *testing.T) { }, } events := make(chan Event, 1) - eventInterval := 5 * time.Millisecond + eventInterval := 500 * time.Millisecond cfg := allocatorconfig.Config{} - w, _ := getTestPrometheusCRWatcher(t, nil, nil, cfg) + w, _ := getTestPrometheusCRWatcher(t, nil, nil, nil, nil, cfg) defer w.Close() w.eventInterval = eventInterval @@ -1006,10 +1140,10 @@ func TestRateLimit(t *testing.T) { default: return false } - }, eventInterval*2, time.Millisecond) + }, time.Second*5, eventInterval/10) // it's difficult to measure the rate precisely - // what we do, is send two updates, and then assert that the elapsed time is between eventInterval and 3*eventInterval + // what we do, is send two updates, and then assert that the elapsed time is at least eventInterval startTime := time.Now() _, err = w.kubeMonitoringClient.MonitoringV1().ServiceMonitors("test").Update(context.Background(), serviceMonitor, metav1.UpdateOptions{}) require.NoError(t, err) @@ -1020,7 +1154,7 @@ func TestRateLimit(t *testing.T) { default: return false } - }, eventInterval*2, time.Millisecond) + }, time.Second*5, eventInterval/10) _, err = w.kubeMonitoringClient.MonitoringV1().ServiceMonitors("test").Update(context.Background(), serviceMonitor, metav1.UpdateOptions{}) require.NoError(t, err) require.Eventually(t, func() bool { @@ -1030,16 +1164,14 @@ func TestRateLimit(t *testing.T) { default: 
return false } - }, eventInterval*2, time.Millisecond) + }, time.Second*5, eventInterval/10) elapsedTime := time.Since(startTime) assert.Less(t, eventInterval, elapsedTime) - assert.GreaterOrEqual(t, eventInterval*3, elapsedTime) - } // getTestPrometheusCRWatcher creates a test instance of PrometheusCRWatcher with fake clients // and test secrets. -func getTestPrometheusCRWatcher(t *testing.T, svcMonitors []*monitoringv1.ServiceMonitor, podMonitors []*monitoringv1.PodMonitor, cfg allocatorconfig.Config) (*PrometheusCRWatcher, *fcache.FakeControllerSource) { +func getTestPrometheusCRWatcher(t *testing.T, svcMonitors []*monitoringv1.ServiceMonitor, podMonitors []*monitoringv1.PodMonitor, probes []*monitoringv1.Probe, scrapeConfigs []*promv1alpha1.ScrapeConfig, cfg allocatorconfig.Config) (*PrometheusCRWatcher, *fcache.FakeControllerSource) { mClient := fakemonitoringclient.NewSimpleClientset() for _, sm := range svcMonitors { if sm != nil { @@ -1057,6 +1189,23 @@ func getTestPrometheusCRWatcher(t *testing.T, svcMonitors []*monitoringv1.Servic } } } + for _, prb := range probes { + if prb != nil { + _, err := mClient.MonitoringV1().Probes(prb.Namespace).Create(context.Background(), prb, metav1.CreateOptions{}) + if err != nil { + t.Fatal(err) + } + } + } + + for _, scc := range scrapeConfigs { + if scc != nil { + _, err := mClient.MonitoringV1alpha1().ScrapeConfigs(scc.Namespace).Create(context.Background(), scc, metav1.CreateOptions{}) + if err != nil { + t.Fatal(err) + } + } + } k8sClient := fake.NewSimpleClientset() _, err := k8sClient.CoreV1().Secrets("test").Create(context.Background(), &v1.Secret{ @@ -1096,6 +1245,10 @@ func getTestPrometheusCRWatcher(t *testing.T, svcMonitors []*monitoringv1.Servic PodMonitorSelector: cfg.PrometheusCR.PodMonitorSelector, ServiceMonitorNamespaceSelector: cfg.PrometheusCR.ServiceMonitorNamespaceSelector, PodMonitorNamespaceSelector: cfg.PrometheusCR.PodMonitorNamespaceSelector, + ProbeSelector:
cfg.PrometheusCR.ProbeSelector, + ProbeNamespaceSelector: cfg.PrometheusCR.ProbeNamespaceSelector, + ScrapeConfigSelector: cfg.PrometheusCR.ScrapeConfigSelector, + ScrapeConfigNamespaceSelector: cfg.PrometheusCR.ScrapeConfigNamespaceSelector, ServiceDiscoveryRole: &serviceDiscoveryRole, }, }, @@ -1130,6 +1283,7 @@ func getTestPrometheusCRWatcher(t *testing.T, svcMonitors []*monitoringv1.Servic require.NoError(t, err) return &PrometheusCRWatcher{ + logger: slog.Default(), kubeMonitoringClient: mClient, k8sClient: k8sClient, informers: informers, @@ -1138,6 +1292,8 @@ func getTestPrometheusCRWatcher(t *testing.T, svcMonitors []*monitoringv1.Servic configGenerator: generator, podMonitorNamespaceSelector: cfg.PrometheusCR.PodMonitorNamespaceSelector, serviceMonitorNamespaceSelector: cfg.PrometheusCR.ServiceMonitorNamespaceSelector, + probeNamespaceSelector: cfg.PrometheusCR.ProbeNamespaceSelector, + scrapeConfigNamespaceSelector: cfg.PrometheusCR.ScrapeConfigNamespaceSelector, resourceSelector: resourceSelector, store: store, }, source diff --git a/config/crd/bases/opentelemetry.io_instrumentations.yaml b/config/crd/bases/opentelemetry.io_instrumentations.yaml index 19582f62c6..4032a33613 100644 --- a/config/crd/bases/opentelemetry.io_instrumentations.yaml +++ b/config/crd/bases/opentelemetry.io_instrumentations.yaml @@ -215,6 +215,118 @@ spec: type: object version: type: string + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + 
dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer @@ -330,6 +442,118 @@ spec: x-kubernetes-int-or-string: true type: object type: object + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - 
name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer @@ -407,6 +631,19 @@ spec: properties: endpoint: type: string + tls: + properties: + ca_file: + type: string + cert_file: + type: string + configMapName: + type: string + key_file: + type: string + secretName: + type: string + type: object type: object go: properties: @@ -511,6 +748,118 @@ spec: x-kubernetes-int-or-string: true type: object type: object + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: 
string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer @@ -633,6 +982,118 @@ spec: x-kubernetes-int-or-string: true type: object type: object + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + 
type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer @@ -811,6 +1272,118 @@ spec: x-kubernetes-int-or-string: true type: object type: object + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + 
type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer @@ -921,6 +1494,118 @@ spec: x-kubernetes-int-or-string: true type: object type: object + volumeClaimTemplate: + properties: + metadata: + properties: + 
annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer @@ -1044,6 +1729,118 @@ spec: x-kubernetes-int-or-string: true type: object type: object + 
volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + required: + - spec + type: object volumeLimitSize: anyOf: - type: integer diff --git 
a/config/crd/bases/opentelemetry.io_opentelemetrycollectors.yaml b/config/crd/bases/opentelemetry.io_opentelemetrycollectors.yaml index 05baaaa5df..fc36f4deb5 100644 --- a/config/crd/bases/opentelemetry.io_opentelemetrycollectors.yaml +++ b/config/crd/bases/opentelemetry.io_opentelemetrycollectors.yaml @@ -6949,6 +6949,13 @@ spec: type: boolean type: object type: object + persistentVolumeClaimRetentionPolicy: + properties: + whenDeleted: + type: string + whenScaled: + type: string + type: object podAnnotations: additionalProperties: type: string diff --git a/config/default/kustomization.yaml b/config/default/kustomization.yaml index b5d04b59ae..2475c8ee5b 100644 --- a/config/default/kustomization.yaml +++ b/config/default/kustomization.yaml @@ -18,8 +18,6 @@ bases: - ../manager - ../webhook - ../certmanager -# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'. -#- ../prometheus patchesStrategicMerge: # Protect the /metrics endpoint by putting it behind auth. 
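The hunk above surfaces a `persistentVolumeClaimRetentionPolicy` block (`whenDeleted` / `whenScaled`) in the collector CRD, mirroring the StatefulSet field of the same name. A minimal sketch of the retention semantics, assuming illustrative type and function names (these are not the operator's actual Go types):

```go
package main

import "fmt"

// PVCRetentionPolicy mirrors the two-field shape added to the CRD:
// each field is "Retain" or "Delete", and an unset field defaults to Retain.
// This is a hedged illustration, not the operator's implementation.
type PVCRetentionPolicy struct {
	WhenDeleted string // applied when the owning workload is deleted
	WhenScaled  string // applied when the workload is scaled down
}

// shouldDeletePVC reports whether a PVC would be garbage-collected for a
// given lifecycle event ("deleted" or "scaled") under the policy.
func shouldDeletePVC(p PVCRetentionPolicy, event string) bool {
	switch event {
	case "deleted":
		return p.WhenDeleted == "Delete"
	case "scaled":
		return p.WhenScaled == "Delete"
	}
	return false
}

func main() {
	p := PVCRetentionPolicy{WhenDeleted: "Delete", WhenScaled: "Retain"}
	fmt.Println(shouldDeletePVC(p, "deleted"), shouldDeletePVC(p, "scaled"))
	// prints: true false
}
```

Leaving both fields unset keeps today's behavior (PVCs are retained), which is why the CRD marks neither property as required.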
diff --git a/config/default/manager_auth_proxy_patch.yaml b/config/default/manager_auth_proxy_patch.yaml
index 9969c5c16e..4ac6ff2247 100644
--- a/config/default/manager_auth_proxy_patch.yaml
+++ b/config/default/manager_auth_proxy_patch.yaml
@@ -10,7 +10,7 @@ spec:
     spec:
       containers:
       - name: kube-rbac-proxy
-        image: gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1
+        image: quay.io/brancz/kube-rbac-proxy:v0.13.1
         args:
         - "--secure-listen-address=0.0.0.0:8443"
         - "--upstream=http://127.0.0.1:8080/"
diff --git a/config/manager/kustomization.yaml b/config/manager/kustomization.yaml
index 5c5f0b84cb..372a75ae43 100644
--- a/config/manager/kustomization.yaml
+++ b/config/manager/kustomization.yaml
@@ -1,2 +1,3 @@
 resources:
 - manager.yaml
+
diff --git a/config/overlays/openshift/kustomization.yaml b/config/overlays/openshift/kustomization.yaml
index ddd0d3b29b..dd5b4300d0 100644
--- a/config/overlays/openshift/kustomization.yaml
+++ b/config/overlays/openshift/kustomization.yaml
@@ -8,3 +8,7 @@ patches:
     kind: Deployment
     name: controller-manager
   path: manager-patch.yaml
+
+patchesStrategicMerge:
+- metrics_service_tls_patch.yaml
+- manager_auth_proxy_tls_patch.yaml
\ No newline at end of file
diff --git a/config/overlays/openshift/manager-patch.yaml b/config/overlays/openshift/manager-patch.yaml
index 2fb76bd889..57b097ca29 100644
--- a/config/overlays/openshift/manager-patch.yaml
+++ b/config/overlays/openshift/manager-patch.yaml
@@ -7,6 +7,6 @@
   - --zap-time-encoding=rfc3339nano
   - --enable-nginx-instrumentation=true
   - '--enable-go-instrumentation=true'
-  - '--enable-multi-instrumentation=true'
   - '--openshift-create-dashboard=true'
   - '--feature-gates=+operator.observability.prometheus'
+  - '--enable-cr-metrics=true'
\ No newline at end of file
diff --git a/config/overlays/openshift/manager_auth_proxy_tls_patch.yaml b/config/overlays/openshift/manager_auth_proxy_tls_patch.yaml
new file mode 100644
index 0000000000..077fa74ea6
--- /dev/null
+++ b/config/overlays/openshift/manager_auth_proxy_tls_patch.yaml
@@ -0,0 +1,29 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: controller-manager
+  namespace: system
+spec:
+  template:
+    spec:
+      containers:
+      - name: manager # without this line, kustomize reorders the containers, making kube-rbac-proxy the default container
+      - name: kube-rbac-proxy
+        args:
+        - "--secure-listen-address=0.0.0.0:8443"
+        - "--upstream=http://127.0.0.1:8080/"
+        - "--logtostderr=true"
+        - "--v=0"
+        - "--tls-cert-file=/var/run/tls/server/tls.crt"
+        - "--tls-private-key-file=/var/run/tls/server/tls.key"
+        - "--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA256"
+        - "--tls-min-version=VersionTLS12"
+        volumeMounts:
+        - mountPath: /var/run/tls/server
+          name: opentelemetry-operator-metrics-cert
+      volumes:
+      - name: opentelemetry-operator-metrics-cert
+        secret:
+          defaultMode: 420
+          # secret generated by the 'service.beta.openshift.io/serving-cert-secret-name' annotation on the metrics-service
+          secretName: opentelemetry-operator-metrics
diff --git a/config/overlays/openshift/metrics_service_tls_patch.yaml b/config/overlays/openshift/metrics_service_tls_patch.yaml
new file mode 100644
index 0000000000..7505c7894a
--- /dev/null
+++ b/config/overlays/openshift/metrics_service_tls_patch.yaml
@@ -0,0 +1,7 @@
+apiVersion: v1
+kind: Service
+metadata:
+  annotations:
+    service.beta.openshift.io/serving-cert-secret-name: opentelemetry-operator-metrics
+  name: controller-manager-metrics-service
+  namespace: system
diff --git a/config/prometheus/kustomization.yaml b/config/prometheus/kustomization.yaml
deleted file mode 100644
index ed137168a1..0000000000
--- a/config/prometheus/kustomization.yaml
+++ /dev/null
@@ -1,2 +0,0 @@
-resources:
-- monitor.yaml
diff --git a/config/prometheus/monitor.yaml b/config/prometheus/monitor.yaml
deleted file mode 100644
index 6e5f438a21..0000000000
--- a/config/prometheus/monitor.yaml
+++ /dev/null
@@ -1,26 +0,0 @@
-
-# Prometheus Monitor Service (Metrics)
-apiVersion: monitoring.coreos.com/v1
-kind: ServiceMonitor
-metadata:
-  labels:
-    app.kubernetes.io/name: opentelemetry-operator
-    control-plane: controller-manager
-  name: controller-manager-metrics-monitor
-  namespace: system
-spec:
-  endpoints:
-  - path: /metrics
-    port: https
-    scheme: https
-    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
-    tlsConfig:
-      insecureSkipVerify: false
-      ca:
-        secret:
-          key: ca.crt
-          name: opentelemetry-operator-controller-manager-service-cert
-  selector:
-    matchLabels:
-      app.kubernetes.io/name: opentelemetry-operator
-      control-plane: controller-manager
diff --git a/config/rbac/role.yaml b/config/rbac/role.yaml
index 73632f89c8..a03aeb18e8 100644
--- a/config/rbac/role.yaml
+++ b/config/rbac/role.yaml
@@ -30,7 +30,9 @@ rules:
   - ""
   resources:
   - namespaces
+  - secrets
   verbs:
+  - get
   - list
   - watch
 - apiGroups:
@@ -133,6 +135,7 @@ rules:
   - opentelemetry.io
   resources:
   - opampbridges
+  - targetallocators
   verbs:
   - create
   - delete
@@ -153,6 +156,7 @@ rules:
   - opampbridges/status
   - opentelemetrycollectors/finalizers
   - opentelemetrycollectors/status
+  - targetallocators/status
   verbs:
   - get
   - patch
diff --git a/controllers/builder_test.go b/controllers/builder_test.go
index e3b495e00a..793bc217e2 100644
--- a/controllers/builder_test.go
+++ b/controllers/builder_test.go
@@ -15,9 +15,10 @@ package controllers
 import (
-	"strings"
 	"testing"
 
+	cmv1 "github.com/cert-manager/cert-manager/pkg/apis/certmanager/v1"
+	cmmetav1 "github.com/cert-manager/cert-manager/pkg/apis/meta/v1"
 	"github.com/go-logr/logr"
 	monitoringv1
"github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1" "github.com/stretchr/testify/require" @@ -35,10 +36,12 @@ import ( "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager" "github.com/open-telemetry/opentelemetry-operator/internal/config" "github.com/open-telemetry/opentelemetry-operator/internal/manifests" "github.com/open-telemetry/opentelemetry-operator/internal/manifests/collector" "github.com/open-telemetry/opentelemetry-operator/internal/manifests/manifestutils" + "github.com/open-telemetry/opentelemetry-operator/internal/manifests/targetallocator" "github.com/open-telemetry/opentelemetry-operator/pkg/featuregate" ) @@ -1199,7 +1202,7 @@ endpoint: ws://opamp-server:4320/v1/opamp } } -func TestBuildTargetAllocator(t *testing.T) { +func TestBuildCollectorTargetAllocatorResources(t *testing.T) { var goodConfigYaml = ` receivers: prometheus: @@ -1241,8 +1244,9 @@ service: name string args args want []client.Object - featuregates []string + featuregates []*colfeaturegate.Gate wantErr bool + opts []config.Option }{ { name: "base case", @@ -2183,33 +2187,2637 @@ prometheus_cr: }, }, }, - wantErr: false, - featuregates: []string{}, + wantErr: false, }, - } - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - cfg := config.New( - config.WithCollectorImage("default-collector"), - config.WithTargetAllocatorImage("default-ta-allocator"), - ) - params := manifests.Params{ - Log: logr.Discard(), - Config: cfg, - OtelCol: tt.args.instance, - } - targetAllocator, err := collector.TargetAllocator(params) - require.NoError(t, err) - params.TargetAllocator = targetAllocator - if len(tt.featuregates) > 0 { - fg := strings.Join(tt.featuregates, ",") - flagset := featuregate.Flags(colfeaturegate.GlobalRegistry()) - if err = flagset.Set(featuregate.FeatureGatesFlag, fg); err != nil { 
- t.Errorf("featuregate setting error = %v", err) - return - } - } - got, err := BuildCollector(params) + { + name: "target allocator mtls enabled", + args: args{ + instance: v1beta1.OpenTelemetryCollector{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test", + Namespace: "test", + }, + Spec: v1beta1.OpenTelemetryCollectorSpec{ + OpenTelemetryCommonFields: v1beta1.OpenTelemetryCommonFields{ + Image: "test", + Replicas: &one, + }, + Mode: "statefulset", + Config: goodConfig, + TargetAllocator: v1beta1.TargetAllocatorEmbedded{ + Enabled: true, + FilterStrategy: "relabel-config", + AllocationStrategy: v1beta1.TargetAllocatorAllocationStrategyConsistentHashing, + PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{ + Enabled: true, + }, + }, + }, + }, + }, + want: []client.Object{ + &appsv1.StatefulSet{ + TypeMeta: metav1.TypeMeta{}, + ObjectMeta: metav1.ObjectMeta{ + Name: "test-collector", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{}, + }, + Spec: appsv1.StatefulSetSpec{ + ServiceName: "test-collector", + Replicas: &one, + Selector: &metav1.LabelSelector{ + MatchLabels: selectorLabels, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{ + "opentelemetry-operator-config/sha256": "42773025f65feaf30df59a306a9e38f1aaabe94c8310983beaddb7f648d699b0", + 
"prometheus.io/path": "/metrics", + "prometheus.io/port": "8888", + "prometheus.io/scrape": "true", + }, + }, + Spec: corev1.PodSpec{ + Volumes: []corev1.Volume{ + { + Name: "otc-internal", + VolumeSource: corev1.VolumeSource{ + ConfigMap: &corev1.ConfigMapVolumeSource{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "test-collector-" + goodConfigHash, + }, + Items: []corev1.KeyToPath{ + { + Key: "collector.yaml", + Path: "collector.yaml", + }, + }, + }, + }, + }, + { + Name: "test-ta-client-cert", + VolumeSource: corev1.VolumeSource{ + Secret: &corev1.SecretVolumeSource{ + SecretName: "test-ta-client-cert", + }, + }, + }, + }, + Containers: []corev1.Container{ + { + Name: "otc-container", + Image: "test", + Args: []string{ + "--config=/conf/collector.yaml", + }, + Env: []corev1.EnvVar{ + { + Name: "POD_NAME", + ValueFrom: &corev1.EnvVarSource{ + FieldRef: &corev1.ObjectFieldSelector{ + FieldPath: "metadata.name", + }, + }, + }, + { + Name: "SHARD", + Value: "0", + }, + }, + Ports: []corev1.ContainerPort{ + { + Name: "metrics", + HostPort: 0, + ContainerPort: 8888, + Protocol: "TCP", + }, + }, + VolumeMounts: []corev1.VolumeMount{ + { + Name: "otc-internal", + MountPath: "/conf", + }, + { + Name: "test-ta-client-cert", + MountPath: "/tls", + }, + }, + }, + }, + ShareProcessNamespace: ptr.To(false), + DNSPolicy: "ClusterFirst", + DNSConfig: &corev1.PodDNSConfig{}, + ServiceAccountName: "test-collector", + }, + }, + PodManagementPolicy: "Parallel", + }, + }, + &policyV1.PodDisruptionBudget{ + TypeMeta: metav1.TypeMeta{}, + ObjectMeta: metav1.ObjectMeta{ + Name: "test-collector", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: 
map[string]string{}, + }, + Spec: policyV1.PodDisruptionBudgetSpec{ + Selector: &v1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + }, + MaxUnavailable: &intstr.IntOrString{ + Type: intstr.Int, + IntVal: 1, + }, + }, + }, + &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-collector-" + goodConfigHash, + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{}, + }, + Data: map[string]string{ + "collector.yaml": "exporters:\n debug: null\nreceivers:\n prometheus:\n config: {}\n target_allocator:\n collector_id: ${POD_NAME}\n endpoint: https://test-targetallocator:443\n interval: 30s\n tls:\n ca_file: /tls/ca.crt\n cert_file: /tls/tls.crt\n key_file: /tls/tls.key\nservice:\n pipelines:\n metrics:\n exporters:\n - debug\n receivers:\n - prometheus\n", + }, + }, + &corev1.ServiceAccount{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-collector", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{}, + }, + }, + &corev1.Service{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-collector-monitoring", 
+ Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector-monitoring", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + "operator.opentelemetry.io/collector-service-type": "monitoring", + "operator.opentelemetry.io/collector-monitoring-service": "Exists", + }, + Annotations: map[string]string{}, + }, + Spec: corev1.ServiceSpec{ + Ports: []corev1.ServicePort{ + { + Name: "monitoring", + Port: 8888, + }, + }, + Selector: selectorLabels, + }, + }, + &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: nil, + }, + Data: map[string]string{ + "targetallocator.yaml": `allocation_strategy: consistent-hashing +collector_selector: + matchlabels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/instance: test.test + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/part-of: opentelemetry + matchexpressions: [] +config: + scrape_configs: + - job_name: example + metric_relabel_configs: + - replacement: $1_$2 + source_labels: + - job + target_label: job + relabel_configs: + - replacement: my_service_$1 + source_labels: + - __meta_service_id + target_label: job + - replacement: $1 + source_labels: + - __meta_service_name + target_label: instance +filter_strategy: relabel-config +https: + ca_file_path: /tls/ca.crt + enabled: true + listen_addr: :8443 + tls_cert_file_path: /tls/tls.crt + 
tls_key_file_path: /tls/tls.key +prometheus_cr: + enabled: true + pod_monitor_selector: null + service_monitor_selector: null +`, + }, + }, + &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: nil, + }, + Spec: appsv1.DeploymentSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: taSelectorLabels, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{ + "opentelemetry-targetallocator-config/hash": "f1ce0fdbf69924576576d1d6eb2a3cc91a3f72675b3facbb36702d57027bc6ae", + }, + }, + Spec: corev1.PodSpec{ + Volumes: []corev1.Volume{ + { + Name: "ta-internal", + VolumeSource: corev1.VolumeSource{ + ConfigMap: &corev1.ConfigMapVolumeSource{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "test-targetallocator", + }, + Items: []corev1.KeyToPath{ + { + Key: "targetallocator.yaml", + Path: "targetallocator.yaml", + }, + }, + }, + }, + }, + { + Name: "test-ta-server-cert", + VolumeSource: corev1.VolumeSource{ + Secret: &corev1.SecretVolumeSource{ + SecretName: "test-ta-server-cert", + }, + }, + }, + }, + Containers: []corev1.Container{ + { + Name: "ta-container", + Image: "default-ta-allocator", + Env: []corev1.EnvVar{ + { + Name: "OTELCOL_NAMESPACE", + ValueFrom: 
&corev1.EnvVarSource{ + FieldRef: &corev1.ObjectFieldSelector{ + FieldPath: "metadata.namespace", + }, + }, + }, + }, + Ports: []corev1.ContainerPort{ + { + Name: "http", + HostPort: 0, + ContainerPort: 8080, + Protocol: "TCP", + }, + { + Name: "https", + HostPort: 0, + ContainerPort: 8443, + Protocol: "TCP", + }, + }, + VolumeMounts: []corev1.VolumeMount{ + { + Name: "ta-internal", + MountPath: "/conf", + }, + { + Name: "test-ta-server-cert", + MountPath: "/tls", + }, + }, + LivenessProbe: &corev1.Probe{ + ProbeHandler: corev1.ProbeHandler{ + HTTPGet: &corev1.HTTPGetAction{ + Path: "/livez", + Port: intstr.FromInt(8080), + }, + }, + }, + ReadinessProbe: &corev1.Probe{ + ProbeHandler: corev1.ProbeHandler{ + HTTPGet: &corev1.HTTPGetAction{ + Path: "/readyz", + Port: intstr.FromInt(8080), + }, + }, + }, + }, + }, + DNSPolicy: "ClusterFirst", + DNSConfig: &corev1.PodDNSConfig{}, + ShareProcessNamespace: ptr.To(false), + ServiceAccountName: "test-targetallocator", + }, + }, + }, + }, + &corev1.ServiceAccount{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + }, + }, + &corev1.Service{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: nil, + }, + Spec: corev1.ServiceSpec{ + Ports: []corev1.ServicePort{ + 
{ + Name: "targetallocation", + Port: 80, + TargetPort: intstr.IntOrString{ + Type: 1, + StrVal: "http", + }, + }, + { + Name: "targetallocation-https", + Port: 443, + TargetPort: intstr.IntOrString{ + Type: 1, + StrVal: "https", + }, + }, + }, + Selector: taSelectorLabels, + }, + }, + &policyV1.PodDisruptionBudget{ + TypeMeta: metav1.TypeMeta{}, + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{ + "opentelemetry-targetallocator-config/hash": "f1ce0fdbf69924576576d1d6eb2a3cc91a3f72675b3facbb36702d57027bc6ae", + }, + }, + Spec: policyV1.PodDisruptionBudgetSpec{ + Selector: &v1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + MaxUnavailable: &intstr.IntOrString{ + Type: intstr.Int, + IntVal: 1, + }, + }, + }, + &cmv1.Issuer{ + TypeMeta: metav1.TypeMeta{}, + ObjectMeta: metav1.ObjectMeta{ + Name: "test-self-signed-issuer", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-self-signed-issuer", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + }, + Spec: cmv1.IssuerSpec{ + IssuerConfig: cmv1.IssuerConfig{ + SelfSigned: 
&cmv1.SelfSignedIssuer{ + CRLDistributionPoints: nil, + }, + }, + }, + }, + &cmv1.Certificate{ + TypeMeta: metav1.TypeMeta{}, + ObjectMeta: metav1.ObjectMeta{ + Name: "test-ca-cert", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-ca-cert", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + }, + Spec: cmv1.CertificateSpec{ + Subject: &cmv1.X509Subject{ + OrganizationalUnits: []string{"opentelemetry-operator"}, + }, + CommonName: "test-ca-cert", + IsCA: true, + SecretName: "test-ca-cert", + IssuerRef: cmmetav1.ObjectReference{ + Name: "test-self-signed-issuer", + Kind: "Issuer", + }, + }, + }, + &cmv1.Issuer{ + TypeMeta: metav1.TypeMeta{}, + ObjectMeta: metav1.ObjectMeta{ + Name: "test-ca-issuer", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-ca-issuer", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + }, + Spec: cmv1.IssuerSpec{ + IssuerConfig: cmv1.IssuerConfig{ + CA: &cmv1.CAIssuer{ + SecretName: "test-ca-cert", + }, + }, + }, + }, + &cmv1.Certificate{ + TypeMeta: metav1.TypeMeta{}, + ObjectMeta: metav1.ObjectMeta{ + Name: "test-ta-server-cert", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-ta-server-cert", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + }, + Spec: cmv1.CertificateSpec{ + Subject: 
&cmv1.X509Subject{ + OrganizationalUnits: []string{"opentelemetry-operator"}, + }, + DNSNames: []string{ + "test-targetallocator", + "test-targetallocator.test.svc", + "test-targetallocator.test.svc.cluster.local", + }, + SecretName: "test-ta-server-cert", + IssuerRef: cmmetav1.ObjectReference{ + Name: "test-ca-issuer", + Kind: "Issuer", + }, + Usages: []cmv1.KeyUsage{ + "client auth", + "server auth", + }, + }, + }, + &cmv1.Certificate{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-ta-client-cert", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-ta-client-cert", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + }, + Spec: cmv1.CertificateSpec{ + Subject: &cmv1.X509Subject{ + OrganizationalUnits: []string{"opentelemetry-operator"}, + }, + DNSNames: []string{ + "test-targetallocator", + "test-targetallocator.test.svc", + "test-targetallocator.test.svc.cluster.local", + }, + SecretName: "test-ta-client-cert", + IssuerRef: cmmetav1.ObjectReference{ + Name: "test-ca-issuer", + Kind: "Issuer", + }, + Usages: []cmv1.KeyUsage{ + "client auth", + "server auth", + }, + }, + }, + }, + wantErr: false, + opts: []config.Option{ + config.WithCertManagerAvailability(certmanager.Available), + }, + featuregates: []*colfeaturegate.Gate{featuregate.EnableTargetAllocatorMTLS}, + }, + } + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + opts := []config.Option{ + config.WithCollectorImage("default-collector"), + config.WithTargetAllocatorImage("default-ta-allocator"), + } + opts = append(opts, tt.opts...) 
+ cfg := config.New( + opts..., + ) + params := manifests.Params{ + Log: logr.Discard(), + Config: cfg, + OtelCol: tt.args.instance, + } + targetAllocator, err := collector.TargetAllocator(params) + require.NoError(t, err) + params.TargetAllocator = targetAllocator + registry := colfeaturegate.GlobalRegistry() + for _, gate := range tt.featuregates { + current := gate.IsEnabled() + require.False(t, current, "only enable gates which are disabled by default") + if setErr := registry.Set(gate.ID(), true); setErr != nil { + require.NoError(t, setErr) + return + } + t.Cleanup(func() { + setErr := registry.Set(gate.ID(), current) + require.NoError(t, setErr) + }) + } + got, err := BuildCollector(params) + if (err != nil) != tt.wantErr { + t.Errorf("BuildAll() error = %v, wantErr %v", err, tt.wantErr) + return + } + require.Equal(t, tt.want, got) + + }) + } +} + +func TestBuildCollectorTargetAllocatorCR(t *testing.T) { + var goodConfigYaml = ` +receivers: + prometheus: + config: + scrape_configs: + - job_name: 'example' + relabel_configs: + - source_labels: ['__meta_service_id'] + target_label: 'job' + replacement: 'my_service_$$1' + - source_labels: ['__meta_service_name'] + target_label: 'instance' + replacement: '$1' + metric_relabel_configs: + - source_labels: ['job'] + target_label: 'job' + replacement: '$$1_$2' +exporters: + debug: +service: + pipelines: + metrics: + receivers: [prometheus] + exporters: [debug] +` + + goodConfig := v1beta1.Config{} + err := go_yaml.Unmarshal([]byte(goodConfigYaml), &goodConfig) + require.NoError(t, err) + + goodConfigHash, _ := manifestutils.GetConfigMapSHA(goodConfig) + goodConfigHash = goodConfigHash[:8] + + one := int32(1) + type args struct { + instance v1beta1.OpenTelemetryCollector + } + tests := []struct { + name string + args args + want []client.Object + featuregates []*colfeaturegate.Gate + wantErr bool + opts []config.Option + }{ + { + name: "base case", + args: args{ + instance: v1beta1.OpenTelemetryCollector{ + 
ObjectMeta: metav1.ObjectMeta{ + Name: "test", + Namespace: "test", + }, + Spec: v1beta1.OpenTelemetryCollectorSpec{ + OpenTelemetryCommonFields: v1beta1.OpenTelemetryCommonFields{ + Image: "test", + Replicas: &one, + }, + Mode: "statefulset", + Config: goodConfig, + TargetAllocator: v1beta1.TargetAllocatorEmbedded{ + Enabled: true, + FilterStrategy: "relabel-config", + AllocationStrategy: v1beta1.TargetAllocatorAllocationStrategyConsistentHashing, + PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{ + Enabled: true, + }, + }, + }, + }, + }, + want: []client.Object{ + &appsv1.StatefulSet{ + TypeMeta: metav1.TypeMeta{}, + ObjectMeta: metav1.ObjectMeta{ + Name: "test-collector", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{}, + }, + Spec: appsv1.StatefulSetSpec{ + ServiceName: "test-collector", + Replicas: &one, + Selector: &metav1.LabelSelector{ + MatchLabels: selectorLabels, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{ + "opentelemetry-operator-config/sha256": "42773025f65feaf30df59a306a9e38f1aaabe94c8310983beaddb7f648d699b0", + "prometheus.io/path": "/metrics", + "prometheus.io/port": "8888", + "prometheus.io/scrape": "true", + }, + }, + Spec: corev1.PodSpec{ + Volumes: []corev1.Volume{ + { + Name: "otc-internal", + VolumeSource: 
corev1.VolumeSource{ + ConfigMap: &corev1.ConfigMapVolumeSource{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "test-collector-" + goodConfigHash, + }, + Items: []corev1.KeyToPath{ + { + Key: "collector.yaml", + Path: "collector.yaml", + }, + }, + }, + }, + }, + }, + Containers: []corev1.Container{ + { + Name: "otc-container", + Image: "test", + Args: []string{ + "--config=/conf/collector.yaml", + }, + Env: []corev1.EnvVar{ + { + Name: "POD_NAME", + ValueFrom: &corev1.EnvVarSource{ + FieldRef: &corev1.ObjectFieldSelector{ + FieldPath: "metadata.name", + }, + }, + }, + { + Name: "SHARD", + Value: "0", + }, + }, + Ports: []corev1.ContainerPort{ + { + Name: "metrics", + HostPort: 0, + ContainerPort: 8888, + Protocol: "TCP", + }, + }, + VolumeMounts: []corev1.VolumeMount{ + { + Name: "otc-internal", + MountPath: "/conf", + }, + }, + }, + }, + ShareProcessNamespace: ptr.To(false), + DNSPolicy: "ClusterFirst", + DNSConfig: &corev1.PodDNSConfig{}, + ServiceAccountName: "test-collector", + }, + }, + PodManagementPolicy: "Parallel", + }, + }, + &policyV1.PodDisruptionBudget{ + TypeMeta: metav1.TypeMeta{}, + ObjectMeta: metav1.ObjectMeta{ + Name: "test-collector", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{}, + }, + Spec: policyV1.PodDisruptionBudgetSpec{ + Selector: &v1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + 
}, + MaxUnavailable: &intstr.IntOrString{ + Type: intstr.Int, + IntVal: 1, + }, + }, + }, + &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-collector-" + goodConfigHash, + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{}, + }, + Data: map[string]string{ + "collector.yaml": "exporters:\n debug: null\nreceivers:\n prometheus:\n config: {}\n target_allocator:\n collector_id: ${POD_NAME}\n endpoint: http://test-targetallocator:80\n interval: 30s\nservice:\n pipelines:\n metrics:\n exporters:\n - debug\n receivers:\n - prometheus\n", + }, + }, + &corev1.ServiceAccount{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-collector", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{}, + }, + }, + &corev1.Service{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-collector-monitoring", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector-monitoring", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + "operator.opentelemetry.io/collector-service-type": "monitoring", + "operator.opentelemetry.io/collector-monitoring-service": "Exists", + }, + Annotations: 
map[string]string{}, + }, + Spec: corev1.ServiceSpec{ + Ports: []corev1.ServicePort{ + { + Name: "monitoring", + Port: 8888, + }, + }, + Selector: selectorLabels, + }, + }, + &v1alpha1.TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test", + Namespace: "test", + Labels: nil, + }, + Spec: v1alpha1.TargetAllocatorSpec{ + FilterStrategy: v1beta1.TargetAllocatorFilterStrategyRelabelConfig, + AllocationStrategy: v1beta1.TargetAllocatorAllocationStrategyConsistentHashing, + PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{ + Enabled: true, + }, + }, + }, + }, + wantErr: false, + }, + { + name: "enable metrics case", + args: args{ + instance: v1beta1.OpenTelemetryCollector{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test", + Namespace: "test", + }, + Spec: v1beta1.OpenTelemetryCollectorSpec{ + OpenTelemetryCommonFields: v1beta1.OpenTelemetryCommonFields{ + Image: "test", + Replicas: &one, + }, + Mode: "statefulset", + Config: goodConfig, + TargetAllocator: v1beta1.TargetAllocatorEmbedded{ + Enabled: true, + PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{ + Enabled: true, + }, + FilterStrategy: "relabel-config", + Observability: v1beta1.ObservabilitySpec{ + Metrics: v1beta1.MetricsConfigSpec{ + EnableMetrics: true, + }, + }, + }, + }, + }, + }, + want: []client.Object{ + &appsv1.StatefulSet{ + TypeMeta: metav1.TypeMeta{}, + ObjectMeta: metav1.ObjectMeta{ + Name: "test-collector", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{}, + }, + Spec: appsv1.StatefulSetSpec{ + ServiceName: "test-collector", + Replicas: &one, + Selector: &metav1.LabelSelector{ + MatchLabels: selectorLabels, + }, + Template: corev1.PodTemplateSpec{ + 
ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{ + "opentelemetry-operator-config/sha256": "42773025f65feaf30df59a306a9e38f1aaabe94c8310983beaddb7f648d699b0", + "prometheus.io/path": "/metrics", + "prometheus.io/port": "8888", + "prometheus.io/scrape": "true", + }, + }, + Spec: corev1.PodSpec{ + Volumes: []corev1.Volume{ + { + Name: "otc-internal", + VolumeSource: corev1.VolumeSource{ + ConfigMap: &corev1.ConfigMapVolumeSource{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "test-collector-" + goodConfigHash, + }, + Items: []corev1.KeyToPath{ + { + Key: "collector.yaml", + Path: "collector.yaml", + }, + }, + }, + }, + }, + }, + Containers: []corev1.Container{ + { + Name: "otc-container", + Image: "test", + Args: []string{ + "--config=/conf/collector.yaml", + }, + Env: []corev1.EnvVar{ + { + Name: "POD_NAME", + ValueFrom: &corev1.EnvVarSource{ + FieldRef: &corev1.ObjectFieldSelector{ + FieldPath: "metadata.name", + }, + }, + }, + { + Name: "SHARD", + Value: "0", + }, + }, + Ports: []corev1.ContainerPort{ + { + Name: "metrics", + HostPort: 0, + ContainerPort: 8888, + Protocol: "TCP", + }, + }, + VolumeMounts: []corev1.VolumeMount{ + { + Name: "otc-internal", + MountPath: "/conf", + }, + }, + }, + }, + ShareProcessNamespace: ptr.To(false), + DNSPolicy: "ClusterFirst", + DNSConfig: &corev1.PodDNSConfig{}, + ServiceAccountName: "test-collector", + }, + }, + PodManagementPolicy: "Parallel", + }, + }, + &policyV1.PodDisruptionBudget{ + TypeMeta: metav1.TypeMeta{}, + ObjectMeta: metav1.ObjectMeta{ + Name: "test-collector", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": 
"opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{}, + }, + Spec: policyV1.PodDisruptionBudgetSpec{ + Selector: &v1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + }, + MaxUnavailable: &intstr.IntOrString{ + Type: intstr.Int, + IntVal: 1, + }, + }, + }, + &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-collector-" + goodConfigHash, + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{}, + }, + Data: map[string]string{ + "collector.yaml": "exporters:\n debug: null\nreceivers:\n prometheus:\n config: {}\n target_allocator:\n collector_id: ${POD_NAME}\n endpoint: http://test-targetallocator:80\n interval: 30s\nservice:\n pipelines:\n metrics:\n exporters:\n - debug\n receivers:\n - prometheus\n", + }, + }, + &corev1.ServiceAccount{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-collector", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector", + 
"app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{}, + }, + }, + &corev1.Service{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-collector-monitoring", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-collector-monitoring", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + "operator.opentelemetry.io/collector-service-type": "monitoring", + "operator.opentelemetry.io/collector-monitoring-service": "Exists", + }, + Annotations: map[string]string{}, + }, + Spec: corev1.ServiceSpec{ + Ports: []corev1.ServicePort{ + { + Name: "monitoring", + Port: 8888, + }, + }, + Selector: selectorLabels, + }, + }, + &v1alpha1.TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test", + Namespace: "test", + Labels: nil, + }, + Spec: v1alpha1.TargetAllocatorSpec{ + FilterStrategy: v1beta1.TargetAllocatorFilterStrategyRelabelConfig, + PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{ + Enabled: true, + }, + Observability: v1beta1.ObservabilitySpec{ + Metrics: v1beta1.MetricsConfigSpec{ + EnableMetrics: true, + }, + }, + }, + }, + }, + wantErr: false, + featuregates: []*colfeaturegate.Gate{}, + }, + } + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + opts := []config.Option{ + config.WithCollectorImage("default-collector"), + config.WithTargetAllocatorImage("default-ta-allocator"), + } + opts = append(opts, tt.opts...) 
+ cfg := config.New( + opts..., + ) + params := manifests.Params{ + Log: logr.Discard(), + Config: cfg, + OtelCol: tt.args.instance, + } + targetAllocator, err := collector.TargetAllocator(params) + require.NoError(t, err) + params.TargetAllocator = targetAllocator + featuregates := []*colfeaturegate.Gate{featuregate.CollectorUsesTargetAllocatorCR} + featuregates = append(featuregates, tt.featuregates...) + registry := colfeaturegate.GlobalRegistry() + for _, gate := range featuregates { + current := gate.IsEnabled() + require.False(t, current, "only enable gates which are disabled by default") + if setErr := registry.Set(gate.ID(), true); setErr != nil { + require.NoError(t, setErr) + return + } + t.Cleanup(func() { + setErr := registry.Set(gate.ID(), current) + require.NoError(t, setErr) + }) + } + got, err := BuildCollector(params) + if (err != nil) != tt.wantErr { + t.Errorf("BuildCollector() error = %v, wantErr %v", err, tt.wantErr) + return + } + require.Equal(t, tt.want, got) + + }) + } +} + +func TestBuildTargetAllocator(t *testing.T) { + type args struct { + instance v1alpha1.TargetAllocator + collector *v1beta1.OpenTelemetryCollector + } + tests := []struct { + name string + args args + want []client.Object + featuregates []*colfeaturegate.Gate + wantErr bool + opts []config.Option + }{ + { + name: "base case", + args: args{ + instance: v1alpha1.TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test", + Namespace: "test", + Labels: nil, + }, + Spec: v1alpha1.TargetAllocatorSpec{ + FilterStrategy: v1beta1.TargetAllocatorFilterStrategyRelabelConfig, + ScrapeConfigs: []v1beta1.AnyConfig{ + {Object: map[string]any{ + "job_name": "example", + "metric_relabel_configs": []any{ + map[string]any{ + "replacement": "$1_$2", + "source_labels": []any{"job"}, + "target_label": "job", + }, + }, + "relabel_configs": []any{ + map[string]any{ + "replacement": "my_service_$1", + "source_labels": []any{"__meta_service_id"}, + "target_label": "job", + }, + map[string]any{ 
+ "replacement": "$1", + "source_labels": []any{"__meta_service_name"}, + "target_label": "instance", + }, + }, + }}, + }, + PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{ + Enabled: true, + }, + }, + }, + }, + want: []client.Object{ + &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: nil, + }, + Data: map[string]string{ + "targetallocator.yaml": `allocation_strategy: consistent-hashing +collector_selector: null +config: + scrape_configs: + - job_name: example + metric_relabel_configs: + - replacement: $1_$2 + source_labels: + - job + target_label: job + relabel_configs: + - replacement: my_service_$1 + source_labels: + - __meta_service_id + target_label: job + - replacement: $1 + source_labels: + - __meta_service_name + target_label: instance +filter_strategy: relabel-config +prometheus_cr: + enabled: true + pod_monitor_selector: null + service_monitor_selector: null +`, + }, + }, + &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: nil, + }, + Spec: appsv1.DeploymentSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: taSelectorLabels, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + 
"app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{ + "opentelemetry-targetallocator-config/hash": "88ab06aab167d58ae2316ddecc9cf0600b9094d27054781dd6aa6e44dcf902fc", + }, + }, + Spec: corev1.PodSpec{ + Volumes: []corev1.Volume{ + { + Name: "ta-internal", + VolumeSource: corev1.VolumeSource{ + ConfigMap: &corev1.ConfigMapVolumeSource{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "test-targetallocator", + }, + Items: []corev1.KeyToPath{ + { + Key: "targetallocator.yaml", + Path: "targetallocator.yaml", + }, + }, + }, + }, + }, + }, + Containers: []corev1.Container{ + { + Name: "ta-container", + Image: "default-ta-allocator", + Env: []corev1.EnvVar{ + { + Name: "OTELCOL_NAMESPACE", + ValueFrom: &corev1.EnvVarSource{ + FieldRef: &corev1.ObjectFieldSelector{ + FieldPath: "metadata.namespace", + }, + }, + }, + }, + Ports: []corev1.ContainerPort{ + { + Name: "http", + HostPort: 0, + ContainerPort: 8080, + Protocol: "TCP", + }, + }, + VolumeMounts: []corev1.VolumeMount{ + { + Name: "ta-internal", + MountPath: "/conf", + }, + }, + LivenessProbe: &corev1.Probe{ + ProbeHandler: corev1.ProbeHandler{ + HTTPGet: &corev1.HTTPGetAction{ + Path: "/livez", + Port: intstr.FromInt(8080), + }, + }, + }, + ReadinessProbe: &corev1.Probe{ + ProbeHandler: corev1.ProbeHandler{ + HTTPGet: &corev1.HTTPGetAction{ + Path: "/readyz", + Port: intstr.FromInt(8080), + }, + }, + }, + }, + }, + DNSPolicy: "ClusterFirst", + DNSConfig: &corev1.PodDNSConfig{}, + ShareProcessNamespace: ptr.To(false), + ServiceAccountName: "test-targetallocator", + }, + }, + }, + }, + &corev1.ServiceAccount{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: 
map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + }, + }, + &corev1.Service{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: nil, + }, + Spec: corev1.ServiceSpec{ + Ports: []corev1.ServicePort{ + { + Name: "targetallocation", + Port: 80, + TargetPort: intstr.IntOrString{ + Type: 1, + StrVal: "http", + }, + }, + }, + Selector: taSelectorLabels, + }, + }, + &policyV1.PodDisruptionBudget{ + TypeMeta: metav1.TypeMeta{}, + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{ + "opentelemetry-targetallocator-config/hash": "88ab06aab167d58ae2316ddecc9cf0600b9094d27054781dd6aa6e44dcf902fc", + }, + }, + Spec: policyV1.PodDisruptionBudgetSpec{ + Selector: &v1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + 
"app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + MaxUnavailable: &intstr.IntOrString{ + Type: intstr.Int, + IntVal: 1, + }, + }, + }, + }, + wantErr: false, + }, + { + name: "enable metrics case", + args: args{ + instance: v1alpha1.TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test", + Namespace: "test", + Labels: nil, + }, + Spec: v1alpha1.TargetAllocatorSpec{ + FilterStrategy: v1beta1.TargetAllocatorFilterStrategyRelabelConfig, + ScrapeConfigs: []v1beta1.AnyConfig{ + {Object: map[string]any{ + "job_name": "example", + "metric_relabel_configs": []any{ + map[string]any{ + "replacement": "$1_$2", + "source_labels": []any{"job"}, + "target_label": "job", + }, + }, + "relabel_configs": []any{ + map[string]any{ + "replacement": "my_service_$1", + "source_labels": []any{"__meta_service_id"}, + "target_label": "job", + }, + map[string]any{ + "replacement": "$1", + "source_labels": []any{"__meta_service_name"}, + "target_label": "instance", + }, + }, + }}, + }, + PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{ + Enabled: true, + }, + AllocationStrategy: v1beta1.TargetAllocatorAllocationStrategyConsistentHashing, + Observability: v1beta1.ObservabilitySpec{ + Metrics: v1beta1.MetricsConfigSpec{ + EnableMetrics: true, + }, + }, + }, + }, + }, + want: []client.Object{ + &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: nil, + }, + Data: map[string]string{ + "targetallocator.yaml": `allocation_strategy: consistent-hashing +collector_selector: null +config: + scrape_configs: + - job_name: 
example + metric_relabel_configs: + - replacement: $1_$2 + source_labels: + - job + target_label: job + relabel_configs: + - replacement: my_service_$1 + source_labels: + - __meta_service_id + target_label: job + - replacement: $1 + source_labels: + - __meta_service_name + target_label: instance +filter_strategy: relabel-config +prometheus_cr: + enabled: true + pod_monitor_selector: null + service_monitor_selector: null +`, + }, + }, + &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: nil, + }, + Spec: appsv1.DeploymentSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: taSelectorLabels, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{ + "opentelemetry-targetallocator-config/hash": "88ab06aab167d58ae2316ddecc9cf0600b9094d27054781dd6aa6e44dcf902fc", + }, + }, + Spec: corev1.PodSpec{ + Volumes: []corev1.Volume{ + { + Name: "ta-internal", + VolumeSource: corev1.VolumeSource{ + ConfigMap: &corev1.ConfigMapVolumeSource{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "test-targetallocator", + }, + Items: []corev1.KeyToPath{ + { + Key: "targetallocator.yaml", + Path: "targetallocator.yaml", + }, + }, + }, + }, + }, + }, + Containers: 
[]corev1.Container{ + { + Name: "ta-container", + Image: "default-ta-allocator", + Env: []corev1.EnvVar{ + { + Name: "OTELCOL_NAMESPACE", + ValueFrom: &corev1.EnvVarSource{ + FieldRef: &corev1.ObjectFieldSelector{ + FieldPath: "metadata.namespace", + }, + }, + }, + }, + Ports: []corev1.ContainerPort{ + { + Name: "http", + HostPort: 0, + ContainerPort: 8080, + Protocol: "TCP", + }, + }, + VolumeMounts: []corev1.VolumeMount{ + { + Name: "ta-internal", + MountPath: "/conf", + }, + }, + LivenessProbe: &corev1.Probe{ + ProbeHandler: corev1.ProbeHandler{ + HTTPGet: &corev1.HTTPGetAction{ + Path: "/livez", + Port: intstr.FromInt(8080), + }, + }, + }, + ReadinessProbe: &corev1.Probe{ + ProbeHandler: corev1.ProbeHandler{ + HTTPGet: &corev1.HTTPGetAction{ + Path: "/readyz", + Port: intstr.FromInt(8080), + }, + }, + }, + }, + }, + ShareProcessNamespace: ptr.To(false), + DNSPolicy: "ClusterFirst", + DNSConfig: &corev1.PodDNSConfig{}, + ServiceAccountName: "test-targetallocator", + }, + }, + }, + }, + &corev1.ServiceAccount{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + }, + }, + &corev1.Service{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: nil, + }, + Spec: corev1.ServiceSpec{ + Ports: 
[]corev1.ServicePort{ + { + Name: "targetallocation", + Port: 80, + TargetPort: intstr.IntOrString{ + Type: 1, + StrVal: "http", + }, + }, + }, + Selector: taSelectorLabels, + }, + }, + &policyV1.PodDisruptionBudget{ + TypeMeta: metav1.TypeMeta{}, + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{ + "opentelemetry-targetallocator-config/hash": "88ab06aab167d58ae2316ddecc9cf0600b9094d27054781dd6aa6e44dcf902fc", + }, + }, + Spec: policyV1.PodDisruptionBudgetSpec{ + Selector: &v1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + MaxUnavailable: &intstr.IntOrString{ + Type: intstr.Int, + IntVal: 1, + }, + }, + }, + &monitoringv1.ServiceMonitor{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: nil, + }, + Spec: monitoringv1.ServiceMonitorSpec{ + Endpoints: []monitoringv1.Endpoint{ + {Port: "targetallocation"}, + }, + Selector: v1.LabelSelector{ + MatchLabels: map[string]string{ + 
"app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + NamespaceSelector: monitoringv1.NamespaceSelector{ + MatchNames: []string{"test"}, + }, + }, + }, + }, + wantErr: false, + }, + { + name: "collector present", + args: args{ + instance: v1alpha1.TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test", + Namespace: "test", + Labels: nil, + }, + Spec: v1alpha1.TargetAllocatorSpec{ + FilterStrategy: v1beta1.TargetAllocatorFilterStrategyRelabelConfig, + PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{ + Enabled: true, + }, + }, + }, + collector: &v1beta1.OpenTelemetryCollector{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test", + Namespace: "test", + }, + Spec: v1beta1.OpenTelemetryCollectorSpec{ + Config: v1beta1.Config{ + Receivers: v1beta1.AnyConfig{ + Object: map[string]any{ + "prometheus": map[string]any{ + "config": map[string]any{ + "scrape_configs": []any{ + map[string]any{ + "job_name": "example", + "metric_relabel_configs": []any{ + map[string]any{ + "replacement": "$1_$2", + "source_labels": []any{"job"}, + "target_label": "job", + }, + }, + "relabel_configs": []any{ + map[string]any{ + "replacement": "my_service_$1", + "source_labels": []any{"__meta_service_id"}, + "target_label": "job", + }, + map[string]any{ + "replacement": "$1", + "source_labels": []any{"__meta_service_name"}, + "target_label": "instance", + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + want: []client.Object{ + &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": 
"test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: nil, + }, + Data: map[string]string{ + "targetallocator.yaml": `allocation_strategy: consistent-hashing +collector_selector: + matchlabels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/instance: test.test + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/part-of: opentelemetry + matchexpressions: [] +config: + scrape_configs: + - job_name: example + metric_relabel_configs: + - replacement: $1_$2 + source_labels: + - job + target_label: job + relabel_configs: + - replacement: my_service_$1 + source_labels: + - __meta_service_id + target_label: job + - replacement: $1 + source_labels: + - __meta_service_name + target_label: instance +filter_strategy: relabel-config +prometheus_cr: + enabled: true + pod_monitor_selector: null + service_monitor_selector: null +`, + }, + }, + &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: nil, + }, + Spec: appsv1.DeploymentSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: taSelectorLabels, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{ 
+ "opentelemetry-targetallocator-config/hash": "9d78d2ecfad18bad24dec7e9a825b4ce45657ecbb2e6b32845b585b7c15ea407", + }, + }, + Spec: corev1.PodSpec{ + Volumes: []corev1.Volume{ + { + Name: "ta-internal", + VolumeSource: corev1.VolumeSource{ + ConfigMap: &corev1.ConfigMapVolumeSource{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "test-targetallocator", + }, + Items: []corev1.KeyToPath{ + { + Key: "targetallocator.yaml", + Path: "targetallocator.yaml", + }, + }, + }, + }, + }, + }, + Containers: []corev1.Container{ + { + Name: "ta-container", + Image: "default-ta-allocator", + Env: []corev1.EnvVar{ + { + Name: "OTELCOL_NAMESPACE", + ValueFrom: &corev1.EnvVarSource{ + FieldRef: &corev1.ObjectFieldSelector{ + FieldPath: "metadata.namespace", + }, + }, + }, + }, + Ports: []corev1.ContainerPort{ + { + Name: "http", + HostPort: 0, + ContainerPort: 8080, + Protocol: "TCP", + }, + }, + VolumeMounts: []corev1.VolumeMount{ + { + Name: "ta-internal", + MountPath: "/conf", + }, + }, + LivenessProbe: &corev1.Probe{ + ProbeHandler: corev1.ProbeHandler{ + HTTPGet: &corev1.HTTPGetAction{ + Path: "/livez", + Port: intstr.FromInt(8080), + }, + }, + }, + ReadinessProbe: &corev1.Probe{ + ProbeHandler: corev1.ProbeHandler{ + HTTPGet: &corev1.HTTPGetAction{ + Path: "/readyz", + Port: intstr.FromInt(8080), + }, + }, + }, + }, + }, + DNSPolicy: "ClusterFirst", + DNSConfig: &corev1.PodDNSConfig{}, + ShareProcessNamespace: ptr.To(false), + ServiceAccountName: "test-targetallocator", + }, + }, + }, + }, + &corev1.ServiceAccount{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + }, + }, + 
&corev1.Service{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: nil, + }, + Spec: corev1.ServiceSpec{ + Ports: []corev1.ServicePort{ + { + Name: "targetallocation", + Port: 80, + TargetPort: intstr.IntOrString{ + Type: 1, + StrVal: "http", + }, + }, + }, + Selector: taSelectorLabels, + }, + }, + &policyV1.PodDisruptionBudget{ + TypeMeta: metav1.TypeMeta{}, + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{ + "opentelemetry-targetallocator-config/hash": "9d78d2ecfad18bad24dec7e9a825b4ce45657ecbb2e6b32845b585b7c15ea407", + }, + }, + Spec: policyV1.PodDisruptionBudgetSpec{ + Selector: &v1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + MaxUnavailable: &intstr.IntOrString{ + Type: intstr.Int, + IntVal: 1, + }, + }, + }, + }, + wantErr: false, + }, + { + name: "mtls", + args: args{ + instance: v1alpha1.TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test", + Namespace: "test", + 
Labels: nil, + }, + Spec: v1alpha1.TargetAllocatorSpec{ + FilterStrategy: v1beta1.TargetAllocatorFilterStrategyRelabelConfig, + PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{ + Enabled: true, + }, + }, + }, + collector: &v1beta1.OpenTelemetryCollector{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test", + Namespace: "test", + }, + Spec: v1beta1.OpenTelemetryCollectorSpec{ + Config: v1beta1.Config{ + Receivers: v1beta1.AnyConfig{ + Object: map[string]any{ + "prometheus": map[string]any{ + "config": map[string]any{ + "scrape_configs": []any{ + map[string]any{ + "job_name": "example", + "metric_relabel_configs": []any{ + map[string]any{ + "replacement": "$1_$2", + "source_labels": []any{"job"}, + "target_label": "job", + }, + }, + "relabel_configs": []any{ + map[string]any{ + "replacement": "my_service_$1", + "source_labels": []any{"__meta_service_id"}, + "target_label": "job", + }, + map[string]any{ + "replacement": "$1", + "source_labels": []any{"__meta_service_name"}, + "target_label": "instance", + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + want: []client.Object{ + &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: nil, + }, + Data: map[string]string{ + "targetallocator.yaml": `allocation_strategy: consistent-hashing +collector_selector: + matchlabels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/instance: test.test + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/part-of: opentelemetry + matchexpressions: [] +config: + scrape_configs: + - job_name: example + metric_relabel_configs: + 
- replacement: $1_$2 + source_labels: + - job + target_label: job + relabel_configs: + - replacement: my_service_$1 + source_labels: + - __meta_service_id + target_label: job + - replacement: $1 + source_labels: + - __meta_service_name + target_label: instance +filter_strategy: relabel-config +https: + ca_file_path: /tls/ca.crt + enabled: true + listen_addr: :8443 + tls_cert_file_path: /tls/tls.crt + tls_key_file_path: /tls/tls.key +prometheus_cr: + enabled: true + pod_monitor_selector: null + service_monitor_selector: null +`, + }, + }, + &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: nil, + }, + Spec: appsv1.DeploymentSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: taSelectorLabels, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{ + "opentelemetry-targetallocator-config/hash": "f1ce0fdbf69924576576d1d6eb2a3cc91a3f72675b3facbb36702d57027bc6ae", + }, + }, + Spec: corev1.PodSpec{ + Volumes: []corev1.Volume{ + { + Name: "ta-internal", + VolumeSource: corev1.VolumeSource{ + ConfigMap: &corev1.ConfigMapVolumeSource{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "test-targetallocator", + }, + Items: []corev1.KeyToPath{ + { + Key: 
"targetallocator.yaml", + Path: "targetallocator.yaml", + }, + }, + }, + }, + }, + { + Name: "test-ta-server-cert", + VolumeSource: corev1.VolumeSource{ + Secret: &corev1.SecretVolumeSource{ + SecretName: "test-ta-server-cert", + }, + }, + }, + }, + Containers: []corev1.Container{ + { + Name: "ta-container", + Image: "default-ta-allocator", + Env: []corev1.EnvVar{ + { + Name: "OTELCOL_NAMESPACE", + ValueFrom: &corev1.EnvVarSource{ + FieldRef: &corev1.ObjectFieldSelector{ + FieldPath: "metadata.namespace", + }, + }, + }, + }, + Ports: []corev1.ContainerPort{ + { + Name: "http", + HostPort: 0, + ContainerPort: 8080, + Protocol: "TCP", + }, + { + Name: "https", + HostPort: 0, + ContainerPort: 8443, + Protocol: "TCP", + }, + }, + VolumeMounts: []corev1.VolumeMount{ + { + Name: "ta-internal", + MountPath: "/conf", + }, + { + Name: "test-ta-server-cert", + MountPath: "/tls", + }, + }, + LivenessProbe: &corev1.Probe{ + ProbeHandler: corev1.ProbeHandler{ + HTTPGet: &corev1.HTTPGetAction{ + Path: "/livez", + Port: intstr.FromInt(8080), + }, + }, + }, + ReadinessProbe: &corev1.Probe{ + ProbeHandler: corev1.ProbeHandler{ + HTTPGet: &corev1.HTTPGetAction{ + Path: "/readyz", + Port: intstr.FromInt(8080), + }, + }, + }, + }, + }, + DNSPolicy: "ClusterFirst", + DNSConfig: &corev1.PodDNSConfig{}, + ShareProcessNamespace: ptr.To(false), + ServiceAccountName: "test-targetallocator", + }, + }, + }, + }, + &corev1.ServiceAccount{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + }, + }, + &corev1.Service{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: 
map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: nil, + }, + Spec: corev1.ServiceSpec{ + Ports: []corev1.ServicePort{ + { + Name: "targetallocation", + Port: 80, + TargetPort: intstr.IntOrString{ + Type: 1, + StrVal: "http", + }, + }, + { + Name: "targetallocation-https", + Port: 443, + TargetPort: intstr.IntOrString{ + Type: 1, + StrVal: "https", + }, + }, + }, + Selector: taSelectorLabels, + }, + }, + &policyV1.PodDisruptionBudget{ + TypeMeta: metav1.TypeMeta{}, + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + Annotations: map[string]string{ + "opentelemetry-targetallocator-config/hash": "f1ce0fdbf69924576576d1d6eb2a3cc91a3f72675b3facbb36702d57027bc6ae", + }, + }, + Spec: policyV1.PodDisruptionBudgetSpec{ + Selector: &v1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-targetallocator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + MaxUnavailable: &intstr.IntOrString{ + Type: intstr.Int, + IntVal: 1, + }, + }, + }, + &cmv1.Issuer{ + TypeMeta: metav1.TypeMeta{}, + ObjectMeta: metav1.ObjectMeta{ + Name: "test-self-signed-issuer", + Namespace: "test", + Labels: 
map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-self-signed-issuer", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + }, + Spec: cmv1.IssuerSpec{ + IssuerConfig: cmv1.IssuerConfig{ + SelfSigned: &cmv1.SelfSignedIssuer{ + CRLDistributionPoints: nil, + }, + }, + }, + }, + &cmv1.Certificate{ + TypeMeta: metav1.TypeMeta{}, + ObjectMeta: metav1.ObjectMeta{ + Name: "test-ca-cert", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-ca-cert", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + }, + Spec: cmv1.CertificateSpec{ + Subject: &cmv1.X509Subject{ + OrganizationalUnits: []string{"opentelemetry-operator"}, + }, + CommonName: "test-ca-cert", + IsCA: true, + SecretName: "test-ca-cert", + IssuerRef: cmmetav1.ObjectReference{ + Name: "test-self-signed-issuer", + Kind: "Issuer", + }, + }, + }, + &cmv1.Issuer{ + TypeMeta: metav1.TypeMeta{}, + ObjectMeta: metav1.ObjectMeta{ + Name: "test-ca-issuer", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-ca-issuer", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + }, + Spec: cmv1.IssuerSpec{ + IssuerConfig: cmv1.IssuerConfig{ + CA: &cmv1.CAIssuer{ + SecretName: "test-ca-cert", + }, + }, + }, + }, + &cmv1.Certificate{ + TypeMeta: metav1.TypeMeta{}, + ObjectMeta: metav1.ObjectMeta{ + Name: "test-ta-server-cert", + 
Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-ta-server-cert", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + }, + Spec: cmv1.CertificateSpec{ + Subject: &cmv1.X509Subject{ + OrganizationalUnits: []string{"opentelemetry-operator"}, + }, + DNSNames: []string{ + "test-targetallocator", + "test-targetallocator.test.svc", + "test-targetallocator.test.svc.cluster.local", + }, + SecretName: "test-ta-server-cert", + IssuerRef: cmmetav1.ObjectReference{ + Name: "test-ca-issuer", + Kind: "Issuer", + }, + Usages: []cmv1.KeyUsage{ + "client auth", + "server auth", + }, + }, + }, + &cmv1.Certificate{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-ta-client-cert", + Namespace: "test", + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/instance": "test.test", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/name": "test-ta-client-cert", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/version": "latest", + }, + }, + Spec: cmv1.CertificateSpec{ + Subject: &cmv1.X509Subject{ + OrganizationalUnits: []string{"opentelemetry-operator"}, + }, + DNSNames: []string{ + "test-targetallocator", + "test-targetallocator.test.svc", + "test-targetallocator.test.svc.cluster.local", + }, + SecretName: "test-ta-client-cert", + IssuerRef: cmmetav1.ObjectReference{ + Name: "test-ca-issuer", + Kind: "Issuer", + }, + Usages: []cmv1.KeyUsage{ + "client auth", + "server auth", + }, + }, + }, + }, + wantErr: false, + opts: []config.Option{ + config.WithCertManagerAvailability(certmanager.Available), + }, + featuregates: []*colfeaturegate.Gate{featuregate.EnableTargetAllocatorMTLS}, + }, + } + for _, tt := range tests { + t.Run(tt.name, 
func(t *testing.T) {
+			opts := []config.Option{
+				config.WithCollectorImage("default-collector"),
+				config.WithTargetAllocatorImage("default-ta-allocator"),
+			}
+			opts = append(opts, tt.opts...)
+			cfg := config.New(
+				opts...,
+			)
+			params := targetallocator.Params{
+				Log:             logr.Discard(),
+				Config:          cfg,
+				TargetAllocator: tt.args.instance,
+				Collector:       tt.args.collector,
+			}
+			registry := colfeaturegate.GlobalRegistry()
+			for _, gate := range tt.featuregates {
+				current := gate.IsEnabled()
+				require.False(t, current, "only enable gates which are disabled by default")
+				if err := registry.Set(gate.ID(), true); err != nil {
+					require.NoError(t, err)
+					return
+				}
+				t.Cleanup(func() {
+					err := registry.Set(gate.ID(), current)
+					require.NoError(t, err)
+				})
+			}
+			got, err := BuildTargetAllocator(params)
 			if (err != nil) != tt.wantErr {
 				t.Errorf("BuildAll() error = %v, wantErr %v", err, tt.wantErr)
 				return
diff --git a/controllers/common.go b/controllers/common.go
index 3003907913..25bdc0c432 100644
--- a/controllers/common.go
+++ b/controllers/common.go
@@ -35,6 +35,7 @@ import (
 	"github.com/open-telemetry/opentelemetry-operator/internal/manifests/collector"
 	"github.com/open-telemetry/opentelemetry-operator/internal/manifests/opampbridge"
 	"github.com/open-telemetry/opentelemetry-operator/internal/manifests/targetallocator"
+	"github.com/open-telemetry/opentelemetry-operator/pkg/featuregate"
 )
 
 func isNamespaceScoped(obj client.Object) bool {
@@ -59,22 +60,26 @@ func BuildCollector(params manifests.Params) ([]client.Object, error) {
 		}
 		resources = append(resources, objs...)
 	}
-	// TODO: Remove this after TargetAllocator CRD is reconciled
-	if params.TargetAllocator != nil {
-		taParams := targetallocator.Params{
-			Client:          params.Client,
-			Scheme:          params.Scheme,
-			Recorder:        params.Recorder,
-			Log:             params.Log,
-			Config:          params.Config,
-			Collector:       &params.OtelCol,
-			TargetAllocator: *params.TargetAllocator,
-		}
-		taResources, err := BuildTargetAllocator(taParams)
-		if err != nil {
-			return nil, err
+	// If we're not building a TargetAllocator CRD, then we need to separately invoke its builder
+	// to directly build the manifests. This is what used to happen before the TargetAllocator CRD
+	// was introduced.
+	if !featuregate.CollectorUsesTargetAllocatorCR.IsEnabled() {
+		if params.TargetAllocator != nil {
+			taParams := targetallocator.Params{
+				Client:          params.Client,
+				Scheme:          params.Scheme,
+				Recorder:        params.Recorder,
+				Log:             params.Log,
+				Config:          params.Config,
+				Collector:       &params.OtelCol,
+				TargetAllocator: *params.TargetAllocator,
+			}
+			taResources, err := BuildTargetAllocator(taParams)
+			if err != nil {
+				return nil, err
+			}
+			resources = append(resources, taResources...)
 		}
-		resources = append(resources, taResources...)
 	}
 	return resources, nil
 }
@@ -155,7 +160,7 @@ func reconcileDesiredObjects(ctx context.Context, kubeClient client.Client, logg
 		op = result
 		return createOrUpdateErr
 	})
-	if crudErr != nil && errors.Is(crudErr, manifests.ImmutableChangeErr) {
+	if crudErr != nil && errors.As(crudErr, &manifests.ImmutableChangeErr) {
 		l.Error(crudErr, "detected immutable field change, trying to delete, new object will be created on next reconcile", "existing", existing.GetName())
 		delErr := kubeClient.Delete(ctx, existing)
 		if delErr != nil {
diff --git a/controllers/opentelemetrycollector_controller.go b/controllers/opentelemetrycollector_controller.go
index 8c616700a6..1f0211f932 100644
--- a/controllers/opentelemetrycollector_controller.go
+++ b/controllers/opentelemetrycollector_controller.go
@@ -38,6 +38,7 @@ import (
 	"sigs.k8s.io/controller-runtime/pkg/client"
 	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
 
+	"github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1"
 	"github.com/open-telemetry/opentelemetry-operator/apis/v1beta1"
 	"github.com/open-telemetry/opentelemetry-operator/internal/autodetect/openshift"
 	"github.com/open-telemetry/opentelemetry-operator/internal/autodetect/prometheus"
@@ -46,7 +47,9 @@ import (
 	"github.com/open-telemetry/opentelemetry-operator/internal/manifests"
 	"github.com/open-telemetry/opentelemetry-operator/internal/manifests/collector"
 	"github.com/open-telemetry/opentelemetry-operator/internal/manifests/manifestutils"
+	internalRbac "github.com/open-telemetry/opentelemetry-operator/internal/rbac"
 	collectorStatus "github.com/open-telemetry/opentelemetry-operator/internal/status/collector"
+	"github.com/open-telemetry/opentelemetry-operator/pkg/constants"
 	"github.com/open-telemetry/opentelemetry-operator/pkg/featuregate"
 )
 
@@ -64,6 +67,7 @@ type OpenTelemetryCollectorReconciler struct {
 	scheme   *runtime.Scheme
 	log      logr.Logger
 	config   config.Config
+	reviewer *internalRbac.Reviewer
 }
 
 // Params is the set of options to build a new OpenTelemetryCollectorReconciler.
@@ -73,6 +77,7 @@ type Params struct {
 	Scheme   *runtime.Scheme
 	Log      logr.Logger
 	Config   config.Config
+	Reviewer *internalRbac.Reviewer
 }
 
 func (r *OpenTelemetryCollectorReconciler) findOtelOwnedObjects(ctx context.Context, params manifests.Params) (map[types.UID]client.Object, error) {
@@ -168,7 +173,7 @@ func (r *OpenTelemetryCollectorReconciler) getConfigMapsToRemove(configVersionsT
 	return ownedConfigMaps
 }
 
-func (r *OpenTelemetryCollectorReconciler) GetParams(instance v1beta1.OpenTelemetryCollector) (manifests.Params, error) {
+func (r *OpenTelemetryCollectorReconciler) GetParams(ctx context.Context, instance v1beta1.OpenTelemetryCollector) (manifests.Params, error) {
 	p := manifests.Params{
 		Config:   r.config,
 		Client:   r.Client,
@@ -176,10 +181,11 @@ func (r *OpenTelemetryCollectorReconciler) GetParams(instance v1beta1.OpenTeleme
 		Log:      r.log,
 		Scheme:   r.scheme,
 		Recorder: r.recorder,
+		Reviewer: r.reviewer,
 	}
 
 	// generate the target allocator CR from the collector CR
-	targetAllocator, err := collector.TargetAllocator(p)
+	targetAllocator, err := r.getTargetAllocator(ctx, p)
 	if err != nil {
 		return p, err
 	}
@@ -187,6 +193,19 @@ func (r *OpenTelemetryCollectorReconciler) GetParams(instance v1beta1.OpenTeleme
 	return p, nil
 }
 
+func (r *OpenTelemetryCollectorReconciler) getTargetAllocator(ctx context.Context, params manifests.Params) (*v1alpha1.TargetAllocator, error) {
+	if taName, ok := params.OtelCol.GetLabels()[constants.LabelTargetAllocator]; ok {
+		targetAllocator := &v1alpha1.TargetAllocator{}
+		taKey := client.ObjectKey{Name: taName, Namespace: params.OtelCol.GetNamespace()}
+		err := r.Client.Get(ctx, taKey, targetAllocator)
+		if err != nil {
+			return nil, err
+		}
+		return targetAllocator, nil
+	}
+	return collector.TargetAllocator(params)
+}
+
 // NewReconciler creates a new reconciler for OpenTelemetryCollector objects.
func NewReconciler(p Params) *OpenTelemetryCollectorReconciler { r := &OpenTelemetryCollectorReconciler{ @@ -195,6 +214,7 @@ func NewReconciler(p Params) *OpenTelemetryCollectorReconciler { scheme: p.Scheme, config: p.Config, recorder: p.Recorder, + reviewer: p.Reviewer, } return r } @@ -212,6 +232,7 @@ func NewReconciler(p Params) *OpenTelemetryCollectorReconciler { // +kubebuilder:rbac:groups=opentelemetry.io,resources=opentelemetrycollectors,verbs=get;list;watch;update;patch // +kubebuilder:rbac:groups=opentelemetry.io,resources=opentelemetrycollectors/status,verbs=get;update;patch // +kubebuilder:rbac:groups=opentelemetry.io,resources=opentelemetrycollectors/finalizers,verbs=get;update;patch +// +kubebuilder:rbac:groups=opentelemetry.io,resources=targetallocators,verbs=get;list;watch;create;update;patch;delete // Reconcile the current state of an OpenTelemetry collector resource with the desired state. func (r *OpenTelemetryCollectorReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { @@ -229,7 +250,7 @@ func (r *OpenTelemetryCollectorReconciler) Reconcile(ctx context.Context, req ct return ctrl.Result{}, client.IgnoreNotFound(err) } - params, err := r.GetParams(instance) + params, err := r.GetParams(ctx, instance) if err != nil { log.Error(err, "Failed to create manifest.Params") return ctrl.Result{}, err diff --git a/controllers/reconcile_test.go b/controllers/reconcile_test.go index 46b0d38837..a0d6fc3bed 100644 --- a/controllers/reconcile_test.go +++ b/controllers/reconcile_test.go @@ -22,7 +22,7 @@ import ( routev1 "github.com/openshift/api/route/v1" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" - "gopkg.in/yaml.v2" + colfeaturegate "go.opentelemetry.io/collector/featuregate" appsv1 "k8s.io/api/apps/v1" autoscalingv2 "k8s.io/api/autoscaling/v2" v1 "k8s.io/api/core/v1" @@ -41,14 +41,15 @@ import ( k8sreconcile "sigs.k8s.io/controller-runtime/pkg/reconcile" 
"github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" + "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" "github.com/open-telemetry/opentelemetry-operator/controllers" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/openshift" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/prometheus" autoRBAC "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/rbac" "github.com/open-telemetry/opentelemetry-operator/internal/config" "github.com/open-telemetry/opentelemetry-operator/internal/manifests" - ta "github.com/open-telemetry/opentelemetry-operator/internal/manifests/targetallocator/adapters" "github.com/open-telemetry/opentelemetry-operator/internal/naming" + "github.com/open-telemetry/opentelemetry-operator/pkg/featuregate" ) const ( @@ -75,6 +76,18 @@ var ( type check[T any] func(t *testing.T, params T) func TestOpenTelemetryCollectorReconciler_Reconcile(t *testing.T) { + // enable the collector CR feature flag, as these tests assume it + // TODO: drop this after the flag is enabled by default + registry := colfeaturegate.GlobalRegistry() + current := featuregate.CollectorUsesTargetAllocatorCR.IsEnabled() + require.False(t, current, "don't set gates which are enabled by default") + err := registry.Set(featuregate.CollectorUsesTargetAllocatorCR.ID(), true) + require.NoError(t, err) + t.Cleanup(func() { + err := registry.Set(featuregate.CollectorUsesTargetAllocatorCR.ID(), current) + require.NoError(t, err) + }) + addedMetadataDeployment := testCollectorWithMode("test-deployment", v1alpha1.ModeDeployment) addedMetadataDeployment.Labels = map[string]string{ labelName: labelVal, @@ -496,10 +509,7 @@ func TestOpenTelemetryCollectorReconciler_Reconcile(t *testing.T) { assert.NoError(t, err) assert.True(t, exists) // Check the TA doesn't exist - exists, err = populateObjectIfExists(t, &v1.ConfigMap{}, namespacedObjectName(naming.TargetAllocator(params.Name), params.Namespace)) - 
assert.NoError(t, err) - assert.False(t, exists) - exists, err = populateObjectIfExists(t, &appsv1.Deployment{}, namespacedObjectName(naming.TargetAllocator(params.Name), params.Namespace)) + exists, err = populateObjectIfExists(t, &v1alpha1.TargetAllocator{}, namespacedObjectName(naming.TargetAllocator(params.Name), params.Namespace)) assert.NoError(t, err) assert.False(t, exists) }, @@ -516,34 +526,35 @@ func TestOpenTelemetryCollectorReconciler_Reconcile(t *testing.T) { exists, err := populateObjectIfExists(t, &v1.ConfigMap{}, namespacedObjectName(naming.ConfigMap(params.Name, configHash), params.Namespace)) assert.NoError(t, err) assert.True(t, exists) - actual := v1.ConfigMap{} - exists, err = populateObjectIfExists(t, &appsv1.Deployment{}, namespacedObjectName(naming.TargetAllocator(params.Name), params.Namespace)) - assert.NoError(t, err) - assert.True(t, exists) - exists, err = populateObjectIfExists(t, &actual, namespacedObjectName(naming.TargetAllocator(params.Name), params.Namespace)) - assert.NoError(t, err) - assert.True(t, exists) - exists, err = populateObjectIfExists(t, &v1.ServiceAccount{}, namespacedObjectName(naming.TargetAllocatorServiceAccount(params.Name), params.Namespace)) - assert.NoError(t, err) - assert.True(t, exists) - promConfig, err := ta.ConfigToPromConfig(testCollectorAssertNoErr(t, "test-stateful-ta", baseTaImage, promFile).Spec.Config) - assert.NoError(t, err) - - taConfig := make(map[interface{}]interface{}) - taConfig["collector_selector"] = metav1.LabelSelector{ - MatchLabels: map[string]string{ - "app.kubernetes.io/instance": "default.test-stateful-ta", - "app.kubernetes.io/managed-by": "opentelemetry-operator", - "app.kubernetes.io/component": "opentelemetry-collector", - "app.kubernetes.io/part-of": "opentelemetry", + actual := v1alpha1.TargetAllocator{} + exists, err = populateObjectIfExists(t, &actual, namespacedObjectName(params.Name, params.Namespace)) + require.NoError(t, err) + require.True(t, exists) + expected := 
v1alpha1.TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + Name: params.Name, + Namespace: params.Namespace, + Labels: nil, + }, + Spec: v1alpha1.TargetAllocatorSpec{ + OpenTelemetryCommonFields: v1beta1.OpenTelemetryCommonFields{}, + AllocationStrategy: "consistent-hashing", + FilterStrategy: "relabel-config", + PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{ + ScrapeInterval: &metav1.Duration{Duration: time.Second * 30}, + ServiceMonitorSelector: &metav1.LabelSelector{}, + PodMonitorSelector: &metav1.LabelSelector{}, + }, }, } - taConfig["config"] = promConfig["config"] - taConfig["allocation_strategy"] = "consistent-hashing" - taConfig["filter_strategy"] = "relabel-config" - taConfigYAML, _ := yaml.Marshal(taConfig) - assert.Equal(t, string(taConfigYAML), actual.Data["targetallocator.yaml"]) - assert.NotContains(t, actual.Data["targetallocator.yaml"], "0.0.0.0:10100") + assert.Equal(t, expected.Name, actual.Name) + assert.Equal(t, expected.Namespace, actual.Namespace) + assert.Equal(t, expected.Labels, actual.Labels) + assert.Equal(t, baseTaImage, actual.Spec.Image) + assert.Equal(t, expected.Spec.AllocationStrategy, actual.Spec.AllocationStrategy) + assert.Equal(t, expected.Spec.FilterStrategy, actual.Spec.FilterStrategy) + assert.Equal(t, expected.Spec.ScrapeConfigs, actual.Spec.ScrapeConfigs) + }, }, wantErr: assert.NoError, @@ -558,14 +569,11 @@ func TestOpenTelemetryCollectorReconciler_Reconcile(t *testing.T) { exists, err := populateObjectIfExists(t, &v1.ConfigMap{}, namespacedObjectName(naming.ConfigMap(params.Name, configHash), params.Namespace)) assert.NoError(t, err) assert.True(t, exists) - actual := v1.ConfigMap{} - exists, err = populateObjectIfExists(t, &appsv1.Deployment{}, namespacedObjectName(naming.TargetAllocator(params.Name), params.Namespace)) - assert.NoError(t, err) - assert.True(t, exists) - exists, err = populateObjectIfExists(t, &actual, namespacedObjectName(naming.TargetAllocator(params.Name), params.Namespace)) - 
assert.NoError(t, err) - assert.True(t, exists) - assert.Contains(t, actual.Data["targetallocator.yaml"], "0.0.0.0:10100") + actual := v1alpha1.TargetAllocator{} + exists, err = populateObjectIfExists(t, &actual, namespacedObjectName(params.Name, params.Namespace)) + require.NoError(t, err) + require.True(t, exists) + assert.Nil(t, actual.Spec.ScrapeConfigs) }, }, wantErr: assert.NoError, @@ -575,11 +583,11 @@ func TestOpenTelemetryCollectorReconciler_Reconcile(t *testing.T) { result: controllerruntime.Result{}, checks: []check[v1alpha1.OpenTelemetryCollector]{ func(t *testing.T, params v1alpha1.OpenTelemetryCollector) { - actual := appsv1.Deployment{} - exists, err := populateObjectIfExists(t, &actual, namespacedObjectName(naming.TargetAllocator(params.Name), params.Namespace)) - assert.NoError(t, err) - assert.True(t, exists) - assert.Equal(t, actual.Spec.Template.Spec.Containers[0].Image, updatedTaImage) + actual := v1alpha1.TargetAllocator{} + exists, err := populateObjectIfExists(t, &actual, namespacedObjectName(params.Name, params.Namespace)) + require.NoError(t, err) + require.True(t, exists) + assert.Equal(t, actual.Spec.Image, updatedTaImage) }, }, wantErr: assert.NoError, diff --git a/controllers/suite_test.go b/controllers/suite_test.go index 4e56fb16de..1dc118d9dd 100644 --- a/controllers/suite_test.go +++ b/controllers/suite_test.go @@ -55,6 +55,7 @@ import ( "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/openshift" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/prometheus" autoRBAC "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/rbac" @@ -63,7 +64,6 @@ import ( 
 	"github.com/open-telemetry/opentelemetry-operator/internal/manifests/collector/testdata"
 	"github.com/open-telemetry/opentelemetry-operator/internal/manifests/manifestutils"
 	"github.com/open-telemetry/opentelemetry-operator/internal/rbac"
-	// +kubebuilder:scaffold:imports
 )
 
 var (
@@ -100,6 +100,7 @@ type mockAutoDetect struct {
 	OpenShiftRoutesAvailabilityFunc func() (openshift.RoutesAvailability, error)
 	PrometheusCRsAvailabilityFunc   func() (prometheus.Availability, error)
 	RBACPermissionsFunc             func(ctx context.Context) (autoRBAC.Availability, error)
+	CertManagerAvailabilityFunc     func(ctx context.Context) (certmanager.Availability, error)
 }
 
 func (m *mockAutoDetect) FIPSEnabled(ctx context.Context) bool {
@@ -127,6 +128,13 @@ func (m *mockAutoDetect) RBACPermissions(ctx context.Context) (autoRBAC.Availabi
 	return autoRBAC.NotAvailable, nil
 }
 
+func (m *mockAutoDetect) CertManagerAvailability(ctx context.Context) (certmanager.Availability, error) {
+	if m.CertManagerAvailabilityFunc != nil {
+		return m.CertManagerAvailabilityFunc(ctx)
+	}
+	return certmanager.NotAvailable, nil
+}
+
 func TestMain(m *testing.M) {
 	ctx, cancel = context.WithCancel(context.TODO())
 	defer cancel()
@@ -191,6 +199,11 @@ func TestMain(m *testing.M) {
 		os.Exit(1)
 	}
 
+	if err = v1alpha1.SetupTargetAllocatorWebhook(mgr, config.New(), reviewer); err != nil {
+		fmt.Printf("failed to SetupWebhookWithManager: %v", err)
+		os.Exit(1)
+	}
+
 	if err = v1alpha1.SetupOpAMPBridgeWebhook(mgr, config.New()); err != nil {
 		fmt.Printf("failed to SetupWebhookWithManager: %v", err)
 		os.Exit(1)
diff --git a/controllers/targetallocator_controller.go b/controllers/targetallocator_controller.go
index 6b748e4535..5ec135ac68 100644
--- a/controllers/targetallocator_controller.go
+++ b/controllers/targetallocator_controller.go
@@ -17,6 +17,8 @@ package controllers
 
 import (
 	"context"
+	"fmt"
+	"slices"
 
 	"github.com/go-logr/logr"
 	monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
@@ -24,16 +26,23 @@ import (
 	corev1 "k8s.io/api/core/v1"
 	policyV1 "k8s.io/api/policy/v1"
 	apierrors "k8s.io/apimachinery/pkg/api/errors"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/runtime"
+	"k8s.io/apimachinery/pkg/types"
 	"k8s.io/client-go/tools/record"
 	ctrl "sigs.k8s.io/controller-runtime"
+	"sigs.k8s.io/controller-runtime/pkg/builder"
 	"sigs.k8s.io/controller-runtime/pkg/client"
+	"sigs.k8s.io/controller-runtime/pkg/handler"
+	"sigs.k8s.io/controller-runtime/pkg/predicate"
+	"sigs.k8s.io/controller-runtime/pkg/reconcile"
 
 	"github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1"
 	"github.com/open-telemetry/opentelemetry-operator/apis/v1beta1"
 	"github.com/open-telemetry/opentelemetry-operator/internal/config"
 	"github.com/open-telemetry/opentelemetry-operator/internal/manifests/targetallocator"
 	taStatus "github.com/open-telemetry/opentelemetry-operator/internal/status/targetallocator"
+	"github.com/open-telemetry/opentelemetry-operator/pkg/constants"
 	"github.com/open-telemetry/opentelemetry-operator/pkg/featuregate"
 )
 
@@ -55,7 +64,11 @@ type TargetAllocatorReconcilerParams struct {
 	Config   config.Config
 }
 
-func (r *TargetAllocatorReconciler) getParams(instance v1alpha1.TargetAllocator) targetallocator.Params {
+func (r *TargetAllocatorReconciler) getParams(ctx context.Context, instance v1alpha1.TargetAllocator) (targetallocator.Params, error) {
+	collector, err := r.getCollector(ctx, instance)
+	if err != nil {
+		return targetallocator.Params{}, err
+	}
 	p := targetallocator.Params{
 		Config:   r.config,
 		Client:   r.Client,
@@ -63,9 +76,47 @@ func (r *TargetAllocatorReconciler) getParams(instance v1alpha1.TargetAllocator)
 		Scheme:          r.scheme,
 		Recorder:        r.recorder,
 		TargetAllocator: instance,
+		Collector:       collector,
 	}
-	return p
+	return p, nil
+}
+
+func (r *TargetAllocatorReconciler) getCollector(ctx context.Context, instance v1alpha1.TargetAllocator) (*v1beta1.OpenTelemetryCollector, error) {
+	var collector v1beta1.OpenTelemetryCollector
+	ownerReferences := instance.GetOwnerReferences()
+	collectorIndex := slices.IndexFunc(ownerReferences, func(reference metav1.OwnerReference) bool {
+		return reference.Kind == "OpenTelemetryCollector"
+	})
+	if collectorIndex != -1 {
+		collectorRef := ownerReferences[collectorIndex]
+		collectorKey := client.ObjectKey{Name: collectorRef.Name, Namespace: instance.GetNamespace()}
+		if err := r.Get(ctx, collectorKey, &collector); err != nil {
+			return nil, fmt.Errorf(
+				"error getting owner for TargetAllocator %s/%s: %w",
+				instance.GetNamespace(), instance.GetName(), err)
+		}
+		return &collector, nil
+	}
+
+	var collectors v1beta1.OpenTelemetryCollectorList
+	listOpts := []client.ListOption{
+		client.InNamespace(instance.GetNamespace()),
+		client.MatchingLabels{
+			constants.LabelTargetAllocator: instance.GetName(),
+		},
+	}
+	err := r.List(ctx, &collectors, listOpts...)
+	if err != nil {
+		return nil, err
+	}
+	if len(collectors.Items) == 0 {
+		return nil, nil
+	} else if len(collectors.Items) > 1 {
+		return nil, fmt.Errorf("found multiple OpenTelemetry collectors annotated with the same Target Allocator: %s/%s", instance.GetNamespace(), instance.GetName())
+	}
+
+	return &collectors.Items[0], nil
+}
 
 // NewTargetAllocatorReconciler creates a new reconciler for TargetAllocator objects.
@@ -85,15 +136,14 @@ func NewTargetAllocatorReconciler( } } -// TODO: Uncomment the lines below after enabling the TA controller in main.go -// // +kubebuilder:rbac:groups="",resources=pods;configmaps;services;serviceaccounts;persistentvolumeclaims;persistentvolumes,verbs=get;list;watch;create;update;patch;delete -// // +kubebuilder:rbac:groups="",resources=events,verbs=create;patch -// // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete -// // +kubebuilder:rbac:groups=policy,resources=poddisruptionbudgets,verbs=get;list;watch;create;update;patch;delete -// // +kubebuilder:rbac:groups=monitoring.coreos.com,resources=servicemonitors;podmonitors,verbs=get;list;watch;create;update;patch;delete -// // +kubebuilder:rbac:groups=opentelemetry.io,resources=opentelemetrycollectors,verbs=get;list;watch;update;patch -// // +kubebuilder:rbac:groups=opentelemetry.io,resources=targetallocators,verbs=get;list;watch;update;patch -// // +kubebuilder:rbac:groups=opentelemetry.io,resources=targetallocators/status,verbs=get;update;patch +// +kubebuilder:rbac:groups="",resources=pods;configmaps;services;serviceaccounts,verbs=get;list;watch;create;update;patch;delete +// +kubebuilder:rbac:groups="",resources=events,verbs=create;patch +// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete +// +kubebuilder:rbac:groups=policy,resources=poddisruptionbudgets,verbs=get;list;watch;create;update;patch;delete +// +kubebuilder:rbac:groups=monitoring.coreos.com,resources=servicemonitors;podmonitors,verbs=get;list;watch;create;update;patch;delete +// +kubebuilder:rbac:groups=opentelemetry.io,resources=opentelemetrycollectors,verbs=get;list;watch;update;patch +// +kubebuilder:rbac:groups=opentelemetry.io,resources=targetallocators,verbs=get;list;watch;update;patch +// +kubebuilder:rbac:groups=opentelemetry.io,resources=targetallocators/status,verbs=get;update;patch // Reconcile the current state of a 
TargetAllocator resource with the desired state. func (r *TargetAllocatorReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { @@ -121,32 +171,91 @@ func (r *TargetAllocatorReconciler) Reconcile(ctx context.Context, req ctrl.Requ return ctrl.Result{}, nil } - params := r.getParams(instance) + params, err := r.getParams(ctx, instance) + if err != nil { + return ctrl.Result{}, err + } desiredObjects, buildErr := BuildTargetAllocator(params) if buildErr != nil { return ctrl.Result{}, buildErr } - err := reconcileDesiredObjects(ctx, r.Client, log, &params.TargetAllocator, params.Scheme, desiredObjects, nil) + err = reconcileDesiredObjects(ctx, r.Client, log, &params.TargetAllocator, params.Scheme, desiredObjects, nil) return taStatus.HandleReconcileStatus(ctx, log, params, err) } // SetupWithManager tells the manager what our controller is interested in. func (r *TargetAllocatorReconciler) SetupWithManager(mgr ctrl.Manager) error { - builder := ctrl.NewControllerManagedBy(mgr). + ctrlBuilder := ctrl.NewControllerManagedBy(mgr). For(&v1alpha1.TargetAllocator{}). Owns(&corev1.ConfigMap{}). Owns(&corev1.ServiceAccount{}). Owns(&corev1.Service{}). Owns(&appsv1.Deployment{}). - Owns(&corev1.PersistentVolume{}). - Owns(&corev1.PersistentVolumeClaim{}). 
Owns(&policyV1.PodDisruptionBudget{}) if featuregate.PrometheusOperatorIsAvailable.IsEnabled() { - builder.Owns(&monitoringv1.ServiceMonitor{}) - builder.Owns(&monitoringv1.PodMonitor{}) + ctrlBuilder.Owns(&monitoringv1.ServiceMonitor{}) + ctrlBuilder.Owns(&monitoringv1.PodMonitor{}) } - return builder.Complete(r) + // watch collectors which have embedded Target Allocator enabled + // we need to do this separately from collector reconciliation, as changes to Config will not lead to changes + // in the TargetAllocator CR + ctrlBuilder.Watches( + &v1beta1.OpenTelemetryCollector{}, + handler.EnqueueRequestsFromMapFunc(getTargetAllocatorForCollector), + builder.WithPredicates( + predicate.NewPredicateFuncs(func(object client.Object) bool { + collector := object.(*v1beta1.OpenTelemetryCollector) + return collector.Spec.TargetAllocator.Enabled + }), + ), + ) + + // watch collectors which have the target allocator label + collectorSelector := metav1.LabelSelector{ + MatchExpressions: []metav1.LabelSelectorRequirement{ + { + Key: constants.LabelTargetAllocator, + Operator: metav1.LabelSelectorOpExists, + }, + }, + } + selectorPredicate, err := predicate.LabelSelectorPredicate(collectorSelector) + if err != nil { + return err + } + ctrlBuilder.Watches( + &v1beta1.OpenTelemetryCollector{}, + handler.EnqueueRequestsFromMapFunc(getTargetAllocatorRequestsFromLabel), + builder.WithPredicates(selectorPredicate), + ) + + return ctrlBuilder.Complete(r) +} + +func getTargetAllocatorForCollector(_ context.Context, collector client.Object) []reconcile.Request { + return []reconcile.Request{ + { + NamespacedName: types.NamespacedName{ + Name: collector.GetName(), + Namespace: collector.GetNamespace(), + }, + }, + } +} + +func getTargetAllocatorRequestsFromLabel(_ context.Context, collector client.Object) []reconcile.Request { + if taName, ok := collector.GetLabels()[constants.LabelTargetAllocator]; ok { + return []reconcile.Request{ + { + NamespacedName: types.NamespacedName{ + Name: 
taName, + Namespace: collector.GetNamespace(), + }, + }, + } + } + return []reconcile.Request{} } diff --git a/controllers/targetallocator_reconciler_test.go b/controllers/targetallocator_reconciler_test.go new file mode 100644 index 0000000000..cd8a889765 --- /dev/null +++ b/controllers/targetallocator_reconciler_test.go @@ -0,0 +1,179 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package controllers + +import ( + "context" + "testing" + + routev1 "github.com/openshift/api/route/v1" + monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + networkingv1 "k8s.io/api/networking/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/types" + utilruntime "k8s.io/apimachinery/pkg/util/runtime" + "k8s.io/client-go/kubernetes/scheme" + "k8s.io/client-go/tools/record" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + logf "sigs.k8s.io/controller-runtime/pkg/log" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" + "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" + "github.com/open-telemetry/opentelemetry-operator/internal/config" + "github.com/open-telemetry/opentelemetry-operator/pkg/constants" +) + +var testLogger = 
logf.Log.WithName("opamp-bridge-controller-unit-tests") + +var ( + testScheme *runtime.Scheme = scheme.Scheme +) + +func init() { + utilruntime.Must(monitoringv1.AddToScheme(testScheme)) + utilruntime.Must(networkingv1.AddToScheme(testScheme)) + utilruntime.Must(routev1.AddToScheme(testScheme)) + utilruntime.Must(v1alpha1.AddToScheme(testScheme)) + utilruntime.Must(v1beta1.AddToScheme(testScheme)) +} + +func TestTargetAllocatorReconciler_GetCollector(t *testing.T) { + testCollector := &v1beta1.OpenTelemetryCollector{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test", + Labels: map[string]string{ + constants.LabelTargetAllocator: "label-ta", + }, + }, + } + fakeClient := fake.NewFakeClient(testCollector) + reconciler := NewTargetAllocatorReconciler( + fakeClient, + testScheme, + record.NewFakeRecorder(10), + config.New(), + testLogger, + ) + + t.Run("not owned by a collector", func(t *testing.T) { + ta := v1alpha1.TargetAllocator{} + collector, err := reconciler.getCollector(context.Background(), ta) + require.NoError(t, err) + assert.Nil(t, collector) + }) + t.Run("owned by a collector", func(t *testing.T) { + ta := v1alpha1.TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + OwnerReferences: []metav1.OwnerReference{ + { + Kind: "OpenTelemetryCollector", + Name: testCollector.Name, + }, + }, + }, + } + collector, err := reconciler.getCollector(context.Background(), ta) + require.NoError(t, err) + assert.Equal(t, testCollector, collector) + }) + t.Run("owning collector doesn't exist", func(t *testing.T) { + ta := v1alpha1.TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test", + Namespace: "default", + OwnerReferences: []metav1.OwnerReference{ + { + Kind: "OpenTelemetryCollector", + Name: "non_existent", + }, + }, + }, + } + collector, err := reconciler.getCollector(context.Background(), ta) + assert.Nil(t, collector) + assert.Errorf(t, err, "error getting owner for TargetAllocator default/test: opentelemetrycollectors.opentelemetry.io \"non_existent\" not 
found") + }) + t.Run("collector attached by label", func(t *testing.T) { + ta := v1alpha1.TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + Name: "label-ta", + }, + } + collector, err := reconciler.getCollector(context.Background(), ta) + require.NoError(t, err) + assert.Equal(t, testCollector, collector) + }) + t.Run("multiple collectors attached by label", func(t *testing.T) { + testCollector2 := testCollector.DeepCopy() + testCollector2.SetName("test2") + fakeClient := fake.NewFakeClient(testCollector, testCollector2) + reconciler := NewTargetAllocatorReconciler( + fakeClient, + testScheme, + record.NewFakeRecorder(10), + config.New(), + testLogger, + ) + ta := v1alpha1.TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + Name: "label-ta", + }, + } + collector, err := reconciler.getCollector(context.Background(), ta) + assert.Nil(t, collector) + assert.Errorf(t, err, "found multiple OpenTelemetry collectors annotated with the same Target Allocator: %s/%s", ta.Namespace, ta.Name) + }) +} + +func TestGetTargetAllocatorForCollector(t *testing.T) { + testCollector := &v1beta1.OpenTelemetryCollector{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test", + Namespace: "default", + }, + } + requests := getTargetAllocatorForCollector(context.Background(), testCollector) + expected := []reconcile.Request{{ + NamespacedName: types.NamespacedName{ + Name: "test", + Namespace: "default", + }, + }} + assert.Equal(t, expected, requests) +} + +func TestGetTargetAllocatorRequestsFromLabel(t *testing.T) { + testCollector := &v1beta1.OpenTelemetryCollector{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test", + Namespace: "default", + Labels: map[string]string{ + constants.LabelTargetAllocator: "label-ta", + }, + }, + } + requests := getTargetAllocatorRequestsFromLabel(context.Background(), testCollector) + expected := []reconcile.Request{{ + NamespacedName: types.NamespacedName{ + Name: "label-ta", + Namespace: "default", + }, + }} + assert.Equal(t, expected, requests) +} diff --git 
a/docs/api.md b/docs/api.md index 24d16da3f4..9601cca2fd 100644 --- a/docs/api.md +++ b/docs/api.md @@ -253,6 +253,14 @@ If the former var had been defined, then the other vars would be ignored.
Apache HTTPD server version. One of 2.4 or 2.2. Default is 2.4
false + + volumeClaimTemplate + object + + VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. +If omitted, an emptyDir is used with size limit VolumeSizeLimit
+ + false volumeLimitSize int or string @@ -894,43 +902,13 @@ only the result of this request.
-### Instrumentation.spec.defaults -[↩ Parent](#instrumentationspec) - - - -Defaults defines default values for the instrumentation. - - - - - - - - - - - - - - - - -
NameTypeDescriptionRequired
useLabelsForResourceAttributesboolean - UseLabelsForResourceAttributes defines whether to use common labels for resource attributes: - - `app.kubernetes.io/name` becomes `service.name` - - `app.kubernetes.io/version` becomes `service.version` - - `app.kubernetes.io/part-of` becomes `service.namespace` - - `app.kubernetes.io/instance` becomes `service.instance.id`
-
false
- - -### Instrumentation.spec.dotnet -[↩ Parent](#instrumentationspec) +### Instrumentation.spec.apacheHttpd.volumeClaimTemplate +[↩ Parent](#instrumentationspecapachehttpd) -DotNet defines configuration for DotNet auto-instrumentation. +VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. +If omitted, an emptyDir is used with size limit VolumeSizeLimit @@ -942,46 +920,37 @@ DotNet defines configuration for DotNet auto-instrumentation. - - - - - - - - - - - + - + - - + +
env[]object - Env defines DotNet specific env vars. There are four layers for env vars' definitions and -the precedence order is: `original container env vars` > `language specific env vars` > `common env vars` > `instrument spec configs' vars`. -If the former var had been defined, then the other vars would be ignored.
-
false
imagestring - Image is a container image with DotNet SDK and auto-instrumentation.
-
false
resourceRequirementsspec object - Resources describes the compute resource requirements.
+ The specification for the PersistentVolumeClaim. The entire content is +copied unchanged into the PVC that gets created from this +template. The same fields as in a PersistentVolumeClaim +are also valid here.
falsetrue
volumeLimitSizeint or stringmetadataobject - VolumeSizeLimit defines size limit for volume used for auto-instrumentation. -The default size is 200Mi.
+ May contain labels and annotations that will be copied into the PVC +when creating it. No other fields are allowed and will be rejected during +validation.
false
-### Instrumentation.spec.dotnet.env[index] -[↩ Parent](#instrumentationspecdotnet) +### Instrumentation.spec.apacheHttpd.volumeClaimTemplate.spec +[↩ Parent](#instrumentationspecapachehttpdvolumeclaimtemplate) -EnvVar represents an environment variable present in a Container. +The specification for the PersistentVolumeClaim. The entire content is +copied unchanged into the PVC that gets created from this +template. The same fields as in a PersistentVolumeClaim +are also valid here. @@ -993,177 +962,125 @@ EnvVar represents an environment variable present in a Container. - - - - - - - + + - + - - -
namestring - Name of the environment variable. Must be a C_IDENTIFIER.
-
true
valuestringaccessModes[]string - Variable references $(VAR_NAME) are expanded -using the previously defined environment variables in the container and -any service environment variables. If a variable cannot be resolved, -the reference in the input string will be unchanged. Double $$ are reduced -to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. -"$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". -Escaped references will never be expanded, regardless of whether the variable -exists or not. -Defaults to "".
+ accessModes contains the desired access modes the volume should have. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
false
valueFromdataSource object - Source for the environment variable's value. Cannot be used if value is not empty.
-
false
- - -### Instrumentation.spec.dotnet.env[index].valueFrom -[↩ Parent](#instrumentationspecdotnetenvindex) - - - -Source for the environment variable's value. Cannot be used if value is not empty. - - - - - - - - - - - - - - - + - + - + - -
NameTypeDescriptionRequired
configMapKeyRefobject - Selects a key of a ConfigMap.
+ dataSource field can be used to specify either: +* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) +* An existing PVC (PersistentVolumeClaim) +If the provisioner or an external controller can support the specified data source, +it will create a new volume based on the contents of the specified data source. +When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, +and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. +If the namespace is specified, then dataSourceRef will not be copied to dataSource.
false
fieldRefdataSourceRef object - Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, -spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
+ dataSourceRef specifies the object from which to populate the volume with data, if a non-empty +volume is desired. This may be any object from a non-empty API group (non +core object) or a PersistentVolumeClaim object. +When this field is specified, volume binding will only succeed if the type of +the specified object matches some installed volume populator or dynamic +provisioner. +This field will replace the functionality of the dataSource field and as such +if both fields are non-empty, they must have the same value. For backwards +compatibility, when namespace isn't specified in dataSourceRef, +both fields (dataSource and dataSourceRef) will be set to the same +value automatically if one of them is empty and the other is non-empty. +When namespace is specified in dataSourceRef, +dataSource isn't set to the same value and must be empty. +There are three important differences between dataSource and dataSourceRef: +* While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects.
false
resourceFieldRefresources object - Selects a resource of the container: only resources limits and requests -(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
+ resources represents the minimum resources the volume should have. +If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements +that are lower than previous value but must still be higher than capacity recorded in the +status field of the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
false
secretKeyRefselector object - Selects a key of a secret in the pod's namespace
+ selector is a label query over volumes to consider for binding.
false
- - -### Instrumentation.spec.dotnet.env[index].valueFrom.configMapKeyRef -[↩ Parent](#instrumentationspecdotnetenvindexvaluefrom) - - - -Selects a key of a ConfigMap. - - - - - - - - - - - - - - - - + - - + + - -
NameTypeDescriptionRequired
keystring - The key to select.
-
true
namestorageClassName string - Name of the referent. -This field is effectively required, but due to backwards compatibility is -allowed to be empty. Instances of this type with an empty value here are -almost certainly wrong. -More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
-
- Default:
+ storageClassName is the name of the StorageClass required by the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
false
optionalbooleanvolumeAttributesClassNamestring - Specify whether the ConfigMap or its key must be defined
+ volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. +If specified, the CSI driver will create or update the volume with the attributes defined +in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, +it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass +will be applied to the claim but it's not allowed to reset this field to empty string once it is set. +If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass +will be set by the persistentvolume controller if it exists. +If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be +set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource +exists. +More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ +(Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default).
false
- - -### Instrumentation.spec.dotnet.env[index].valueFrom.fieldRef -[↩ Parent](#instrumentationspecdotnetenvindexvaluefrom) - - - -Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, -spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. - - - - - - - - - - - - + + - + - +
NameTypeDescriptionRequired
fieldPath
volumeMode string - Path of the field to select in the specified API version.
+ volumeMode defines what type of volume is required by the claim. +Value of Filesystem is implied when not included in claim spec.
truefalse
apiVersionvolumeName string - Version of the schema the FieldPath is written in terms of, defaults to "v1".
+ volumeName is the binding reference to the PersistentVolume backing this claim.
false
-### Instrumentation.spec.dotnet.env[index].valueFrom.resourceFieldRef -[↩ Parent](#instrumentationspecdotnetenvindexvaluefrom) +### Instrumentation.spec.apacheHttpd.volumeClaimTemplate.spec.dataSource +[↩ Parent](#instrumentationspecapachehttpdvolumeclaimtemplatespec) -Selects a resource of the container: only resources limits and requests -(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. +dataSource field can be used to specify either: +* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) +* An existing PVC (PersistentVolumeClaim) +If the provisioner or an external controller can support the specified data source, +it will create a new volume based on the contents of the specified data source. +When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, +and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. +If the namespace is specified, then dataSourceRef will not be copied to dataSource. @@ -1175,36 +1092,53 @@ Selects a resource of the container: only resources limits and requests - + - + - + - - + +
resourcekind string - Required: resource to select
+ Kind is the type of resource being referenced
true
containerNamename string - Container name: required for volumes, optional for env vars
+ Name is the name of resource being referenced
falsetrue
divisorint or stringapiGroupstring - Specifies the output format of the exposed resources, defaults to "1"
+ APIGroup is the group for the resource being referenced. +If APIGroup is not specified, the specified Kind must be in the core API group. +For any other third-party types, APIGroup is required.
false
-### Instrumentation.spec.dotnet.env[index].valueFrom.secretKeyRef -[↩ Parent](#instrumentationspecdotnetenvindexvaluefrom) +### Instrumentation.spec.apacheHttpd.volumeClaimTemplate.spec.dataSourceRef +[↩ Parent](#instrumentationspecapachehttpdvolumeclaimtemplatespec) -Selects a key of a secret in the pod's namespace +dataSourceRef specifies the object from which to populate the volume with data, if a non-empty +volume is desired. This may be any object from a non-empty API group (non +core object) or a PersistentVolumeClaim object. +When this field is specified, volume binding will only succeed if the type of +the specified object matches some installed volume populator or dynamic +provisioner. +This field will replace the functionality of the dataSource field and as such +if both fields are non-empty, they must have the same value. For backwards +compatibility, when namespace isn't specified in dataSourceRef, +both fields (dataSource and dataSourceRef) will be set to the same +value automatically if one of them is empty and the other is non-empty. +When namespace is specified in dataSourceRef, +dataSource isn't set to the same value and must be empty. +There are three important differences between dataSource and dataSourceRef: +* While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects. @@ -1216,42 +1150,51 @@ Selects a key of a secret in the pod's namespace - + + + + + + - - + +
keykind string - The key of the secret to select from. Must be a valid secret key.
+ Kind is the type of resource being referenced
true
name string - Name of the referent. -This field is effectively required, but due to backwards compatibility is -allowed to be empty. Instances of this type with an empty value here are -almost certainly wrong. -More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
-
- Default:
+ Name is the name of resource being referenced
+
true
apiGroupstring + APIGroup is the group for the resource being referenced. +If APIGroup is not specified, the specified Kind must be in the core API group. +For any other third-party types, APIGroup is required.
false
optionalbooleannamespacestring - Specify whether the Secret or its key must be defined
+ Namespace is the namespace of resource being referenced +Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. +(Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
false
-### Instrumentation.spec.dotnet.resourceRequirements -[↩ Parent](#instrumentationspecdotnet) +### Instrumentation.spec.apacheHttpd.volumeClaimTemplate.spec.resources +[↩ Parent](#instrumentationspecapachehttpdvolumeclaimtemplatespec) -Resources describes the compute resource requirements. +resources represents the minimum resources the volume should have. +If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements +that are lower than previous value but must still be higher than capacity recorded in the +status field of the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources @@ -1263,19 +1206,6 @@ Resources describes the compute resource requirements. - - - - -
claims[]object - Claims lists the names of resources, defined in spec.resourceClaims, -that are used by this container. - -This is an alpha field and requires enabling the -DynamicResourceAllocation feature gate. - -This field is immutable. It can only be set for containers.
-
false
limits map[string]int or string @@ -1297,12 +1227,12 @@ More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-co
-### Instrumentation.spec.dotnet.resourceRequirements.claims[index] -[↩ Parent](#instrumentationspecdotnetresourcerequirements) +### Instrumentation.spec.apacheHttpd.volumeClaimTemplate.spec.selector +[↩ Parent](#instrumentationspecapachehttpdvolumeclaimtemplatespec) -ResourceClaim references one entry in PodSpec.ResourceClaims. +selector is a label query over volumes to consider for binding. @@ -1314,32 +1244,223 @@ ResourceClaim references one entry in PodSpec.ResourceClaims. - + + + + + + + + + + +
namematchExpressions[]object + matchExpressions is a list of label selector requirements. The requirements are ANDed.
+
false
matchLabelsmap[string]string + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels +map is equivalent to an element of matchExpressions, whose key field is "key", the +operator is "In", and the values array contains only "value". The requirements are ANDed.
+
false
+ + +### Instrumentation.spec.apacheHttpd.volumeClaimTemplate.spec.selector.matchExpressions[index] +[↩ Parent](#instrumentationspecapachehttpdvolumeclaimtemplatespecselector) + + + +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. + + + + + + + + + + + + - + + + + + +
NameTypeDescriptionRequired
key string - Name must match the name of one entry in pod.spec.resourceClaims of -the Pod where this field is used. It makes that resource available -inside a container.
+ key is the label key that the selector applies to.
true
requestoperator string - Request is the name chosen for a request in the referenced claim. -If empty, everything from the claim is made available, otherwise -only the result of this request.
+ operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist.
+
true
values[]string + values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch.
false
-### Instrumentation.spec.env[index] +### Instrumentation.spec.apacheHttpd.volumeClaimTemplate.metadata +[↩ Parent](#instrumentationspecapachehttpdvolumeclaimtemplate) + + + +May contain labels and annotations that will be copied into the PVC +when creating it. No other fields are allowed and will be rejected during +validation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
annotationsmap[string]string +
+
false
finalizers[]string +
+
false
labelsmap[string]string +
+
false
namestring +
+
false
namespacestring +
+
false
+ + +### Instrumentation.spec.defaults +[↩ Parent](#instrumentationspec) + + + +Defaults defines default values for the instrumentation. + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
useLabelsForResourceAttributesboolean + UseLabelsForResourceAttributes defines whether to use common labels for resource attributes: + - `app.kubernetes.io/name` becomes `service.name` + - `app.kubernetes.io/version` becomes `service.version` + - `app.kubernetes.io/part-of` becomes `service.namespace` + - `app.kubernetes.io/instance` becomes `service.instance.id`
+
false
+ + +### Instrumentation.spec.dotnet [↩ Parent](#instrumentationspec) +DotNet defines configuration for DotNet auto-instrumentation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
env[]object + Env defines DotNet specific env vars. There are four layers for env vars' definitions and +the precedence order is: `original container env vars` > `language specific env vars` > `common env vars` > `instrument spec configs' vars`. +If the former var had been defined, then the other vars would be ignored.
+
false
imagestring + Image is a container image with DotNet SDK and auto-instrumentation.
+
false
resourceRequirementsobject + Resources describes the compute resource requirements.
+
false
volumeClaimTemplateobject + VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. +If omitted, an emptyDir is used with size limit VolumeSizeLimit
+
false
volumeLimitSizeint or string + VolumeSizeLimit defines size limit for volume used for auto-instrumentation. +The default size is 200Mi.
+
false
+ + +### Instrumentation.spec.dotnet.env[index] +[↩ Parent](#instrumentationspecdotnet) + + + EnvVar represents an environment variable present in a Container. @@ -1374,7 +1495,7 @@ Defaults to "".
- +
false
valueFromvalueFrom object Source for the environment variable's value. Cannot be used if value is not empty.
@@ -1384,8 +1505,8 @@ Defaults to "".
-### Instrumentation.spec.env[index].valueFrom -[↩ Parent](#instrumentationspecenvindex) +### Instrumentation.spec.dotnet.env[index].valueFrom +[↩ Parent](#instrumentationspecdotnetenvindex) @@ -1401,14 +1522,14 @@ Source for the environment variable's value. Cannot be used if value is not empt - configMapKeyRef + configMapKeyRef object Selects a key of a ConfigMap.
false - fieldRef + fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, @@ -1416,7 +1537,7 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI false - resourceFieldRef + resourceFieldRef object Selects a resource of the container: only resources limits and requests @@ -1424,7 +1545,7 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI false - secretKeyRef + secretKeyRef object Selects a key of a secret in the pod's namespace
@@ -1434,8 +1555,8 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI -### Instrumentation.spec.env[index].valueFrom.configMapKeyRef -[↩ Parent](#instrumentationspecenvindexvaluefrom) +### Instrumentation.spec.dotnet.env[index].valueFrom.configMapKeyRef +[↩ Parent](#instrumentationspecdotnetenvindexvaluefrom) @@ -1481,8 +1602,8 @@ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/nam -### Instrumentation.spec.env[index].valueFrom.fieldRef -[↩ Parent](#instrumentationspecenvindexvaluefrom) +### Instrumentation.spec.dotnet.env[index].valueFrom.fieldRef +[↩ Parent](#instrumentationspecdotnetenvindexvaluefrom) @@ -1516,8 +1637,8 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI -### Instrumentation.spec.env[index].valueFrom.resourceFieldRef -[↩ Parent](#instrumentationspecenvindexvaluefrom) +### Instrumentation.spec.dotnet.env[index].valueFrom.resourceFieldRef +[↩ Parent](#instrumentationspecdotnetenvindexvaluefrom) @@ -1558,8 +1679,8 @@ Selects a resource of the container: only resources limits and requests -### Instrumentation.spec.env[index].valueFrom.secretKeyRef -[↩ Parent](#instrumentationspecenvindexvaluefrom) +### Instrumentation.spec.dotnet.env[index].valueFrom.secretKeyRef +[↩ Parent](#instrumentationspecdotnetenvindexvaluefrom) @@ -1605,12 +1726,12 @@ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/nam -### Instrumentation.spec.exporter -[↩ Parent](#instrumentationspec) +### Instrumentation.spec.dotnet.resourceRequirements +[↩ Parent](#instrumentationspecdotnet) -Exporter defines exporter configuration. +Resources describes the compute resource requirements. @@ -1622,25 +1743,85 @@ Exporter defines exporter configuration. - + + + + + + + + + + + + + + + +
endpointclaims[]object + Claims lists the names of resources, defined in spec.resourceClaims, +that are used by this container. + +This is an alpha field and requires enabling the +DynamicResourceAllocation feature gate. + +This field is immutable. It can only be set for containers.
+
false
limitsmap[string]int or string + Limits describes the maximum amount of compute resources allowed. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+
false
requestsmap[string]int or string + Requests describes the minimum amount of compute resources required. +If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, +otherwise to an implementation-defined value. Requests cannot exceed Limits. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+
false
+ + +### Instrumentation.spec.dotnet.resourceRequirements.claims[index] +[↩ Parent](#instrumentationspecdotnetresourcerequirements) + + + +ResourceClaim references one entry in PodSpec.ResourceClaims. + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
namestring + Name must match the name of one entry in pod.spec.resourceClaims of +the Pod where this field is used. It makes that resource available +inside a container.
+
true
request string - Endpoint is address of the collector with OTLP endpoint.
+ Request is the name chosen for a request in the referenced claim. +If empty, everything from the claim is made available, otherwise +only the result of this request.
false
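The `resourceRequirements` fields above follow the standard Kubernetes ResourceRequirements shape. As a minimal sketch (the resource name and the values are illustrative, not taken from this patch), bounding the .NET auto-instrumentation sidecar/init container might look like:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: dotnet-instrumentation   # illustrative name
spec:
  dotnet:
    resourceRequirements:
      requests:                  # minimum compute reserved for the container
        cpu: 50m
        memory: 64Mi
      limits:                    # maximum compute allowed; requests must not exceed these
        cpu: 100m
        memory: 128Mi
```

The `claims` field is left out here because, as the table notes, it is alpha and requires the DynamicResourceAllocation feature gate.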
-### Instrumentation.spec.go -[↩ Parent](#instrumentationspec) +### Instrumentation.spec.dotnet.volumeClaimTemplate +[↩ Parent](#instrumentationspecdotnet) -Go defines configuration for Go auto-instrumentation. -When using Go auto-instrumentation you must provide a value for the OTEL_GO_AUTO_TARGET_EXE env var via the -Instrumentation env vars or via the instrumentation.opentelemetry.io/otel-go-auto-target-exe pod annotation. -Failure to set this value causes instrumentation injection to abort, leaving the original pod unchanged. +VolumeClaimTemplate defines a ephemeral volume used for auto-instrumentation. +If omitted, an emptyDir is used with size limit VolumeSizeLimit @@ -1652,42 +1833,2979 @@ Failure to set this value causes instrumentation injection to abort, leaving the - - + + + + + + + + + +
env[]objectspecobject - Env defines Go specific env vars. There are four layers for env vars' definitions and -the precedence order is: `original container env vars` > `language specific env vars` > `common env vars` > `instrument spec configs' vars`. -If the former var had been defined, then the other vars would be ignored.
+ The specification for the PersistentVolumeClaim. The entire content is +copied unchanged into the PVC that gets created from this +template. The same fields as in a PersistentVolumeClaim +are also valid here.
+
true
metadataobject + May contain labels and annotations that will be copied into the PVC +when creating it. No other fields are allowed and will be rejected during +validation.
+
false
+ + +### Instrumentation.spec.dotnet.volumeClaimTemplate.spec +[↩ Parent](#instrumentationspecdotnetvolumeclaimtemplate) + + + +The specification for the PersistentVolumeClaim. The entire content is +copied unchanged into the PVC that gets created from this +template. The same fields as in a PersistentVolumeClaim +are also valid here. + + + + + + + + + + + + + + - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
accessModes[]string + accessModes contains the desired access modes the volume should have. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
false
imagedataSourceobject + dataSource field can be used to specify either: +* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) +* An existing PVC (PersistentVolumeClaim) +If the provisioner or an external controller can support the specified data source, +it will create a new volume based on the contents of the specified data source. +When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, +and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. +If the namespace is specified, then dataSourceRef will not be copied to dataSource.
+
false
dataSourceRefobject + dataSourceRef specifies the object from which to populate the volume with data, if a non-empty +volume is desired. This may be any object from a non-empty API group (non +core object) or a PersistentVolumeClaim object. +When this field is specified, volume binding will only succeed if the type of +the specified object matches some installed volume populator or dynamic +provisioner. +This field will replace the functionality of the dataSource field and as such +if both fields are non-empty, they must have the same value. For backwards +compatibility, when namespace isn't specified in dataSourceRef, +both fields (dataSource and dataSourceRef) will be set to the same +value automatically if one of them is empty and the other is non-empty. +When namespace is specified in dataSourceRef, +dataSource isn't set to the same value and must be empty. +There are three important differences between dataSource and dataSourceRef: +* While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects.
+
false
resourcesobject + resources represents the minimum resources the volume should have. +If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements +that are lower than previous value but must still be higher than capacity recorded in the +status field of the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
+
false
selectorobject + selector is a label query over volumes to consider for binding.
+
false
storageClassName string - Image is a container image with Go SDK and auto-instrumentation.
+ storageClassName is the name of the StorageClass required by the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
+
false
volumeAttributesClassNamestring
          + volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim.
+If specified, the CSI driver will create or update the volume with the attributes defined
+in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName,
+it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass
+will be applied to the claim but it's not allowed to reset this field to empty string once it is set.
+If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass
+will be set by the persistentvolume controller if it exists.
+If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be
+set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource
+exists.
+More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/
+(Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default).
+
false
volumeModestring + volumeMode defines what type of volume is required by the claim. +Value of Filesystem is implied when not included in claim spec.
+
false
volumeNamestring + volumeName is the binding reference to the PersistentVolume backing this claim.
+
false
+ + +### Instrumentation.spec.dotnet.volumeClaimTemplate.spec.dataSource +[↩ Parent](#instrumentationspecdotnetvolumeclaimtemplatespec) + + + +dataSource field can be used to specify either: +* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) +* An existing PVC (PersistentVolumeClaim) +If the provisioner or an external controller can support the specified data source, +it will create a new volume based on the contents of the specified data source. +When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, +and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. +If the namespace is specified, then dataSourceRef will not be copied to dataSource. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
kindstring + Kind is the type of resource being referenced
+
true
namestring + Name is the name of resource being referenced
+
true
apiGroupstring + APIGroup is the group for the resource being referenced. +If APIGroup is not specified, the specified Kind must be in the core API group. +For any other third-party types, APIGroup is required.
+
false
+ + +### Instrumentation.spec.dotnet.volumeClaimTemplate.spec.dataSourceRef +[↩ Parent](#instrumentationspecdotnetvolumeclaimtemplatespec) + + + +dataSourceRef specifies the object from which to populate the volume with data, if a non-empty +volume is desired. This may be any object from a non-empty API group (non +core object) or a PersistentVolumeClaim object. +When this field is specified, volume binding will only succeed if the type of +the specified object matches some installed volume populator or dynamic +provisioner. +This field will replace the functionality of the dataSource field and as such +if both fields are non-empty, they must have the same value. For backwards +compatibility, when namespace isn't specified in dataSourceRef, +both fields (dataSource and dataSourceRef) will be set to the same +value automatically if one of them is empty and the other is non-empty. +When namespace is specified in dataSourceRef, +dataSource isn't set to the same value and must be empty. +There are three important differences between dataSource and dataSourceRef: +* While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
kindstring + Kind is the type of resource being referenced
+
true
namestring + Name is the name of resource being referenced
+
true
apiGroupstring + APIGroup is the group for the resource being referenced. +If APIGroup is not specified, the specified Kind must be in the core API group. +For any other third-party types, APIGroup is required.
+
false
namespacestring + Namespace is the namespace of resource being referenced +Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. +(Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
+
false
+ + +### Instrumentation.spec.dotnet.volumeClaimTemplate.spec.resources +[↩ Parent](#instrumentationspecdotnetvolumeclaimtemplatespec) + + + +resources represents the minimum resources the volume should have. +If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements +that are lower than previous value but must still be higher than capacity recorded in the +status field of the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
limitsmap[string]int or string + Limits describes the maximum amount of compute resources allowed. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+
false
requestsmap[string]int or string + Requests describes the minimum amount of compute resources required. +If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, +otherwise to an implementation-defined value. Requests cannot exceed Limits. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+
false
+ + +### Instrumentation.spec.dotnet.volumeClaimTemplate.spec.selector +[↩ Parent](#instrumentationspecdotnetvolumeclaimtemplatespec) + + + +selector is a label query over volumes to consider for binding. + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
matchExpressions[]object + matchExpressions is a list of label selector requirements. The requirements are ANDed.
+
false
matchLabelsmap[string]string + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels +map is equivalent to an element of matchExpressions, whose key field is "key", the +operator is "In", and the values array contains only "value". The requirements are ANDed.
+
false
+ + +### Instrumentation.spec.dotnet.volumeClaimTemplate.spec.selector.matchExpressions[index] +[↩ Parent](#instrumentationspecdotnetvolumeclaimtemplatespecselector) + + + +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
keystring + key is the label key that the selector applies to.
+
true
operatorstring + operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist.
+
true
values[]string + values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch.
+
false
+ + +### Instrumentation.spec.dotnet.volumeClaimTemplate.metadata +[↩ Parent](#instrumentationspecdotnetvolumeclaimtemplate) + + + +May contain labels and annotations that will be copied into the PVC +when creating it. No other fields are allowed and will be rejected during +validation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
annotationsmap[string]string +
+
false
finalizers[]string +
+
false
labelsmap[string]string +
+
false
namestring +
+
false
namespacestring +
+
false
+ + +### Instrumentation.spec.env[index] +[↩ Parent](#instrumentationspec) + + + +EnvVar represents an environment variable present in a Container. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
namestring + Name of the environment variable. Must be a C_IDENTIFIER.
+
true
valuestring + Variable references $(VAR_NAME) are expanded +using the previously defined environment variables in the container and +any service environment variables. If a variable cannot be resolved, +the reference in the input string will be unchanged. Double $$ are reduced +to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. +"$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". +Escaped references will never be expanded, regardless of whether the variable +exists or not. +Defaults to "".
+
false
valueFromobject + Source for the environment variable's value. Cannot be used if value is not empty.
+
false
+ + +### Instrumentation.spec.env[index].valueFrom +[↩ Parent](#instrumentationspecenvindex) + + + +Source for the environment variable's value. Cannot be used if value is not empty. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
configMapKeyRefobject + Selects a key of a ConfigMap.
+
false
fieldRefobject + Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, +spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
+
false
resourceFieldRefobject + Selects a resource of the container: only resources limits and requests +(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
+
false
secretKeyRefobject + Selects a key of a secret in the pod's namespace
+
false
+ + +### Instrumentation.spec.env[index].valueFrom.configMapKeyRef +[↩ Parent](#instrumentationspecenvindexvaluefrom) + + + +Selects a key of a ConfigMap. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
keystring + The key to select.
+
true
namestring + Name of the referent. +This field is effectively required, but due to backwards compatibility is +allowed to be empty. Instances of this type with an empty value here are +almost certainly wrong. +More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+
+ Default:
+
false
optionalboolean + Specify whether the ConfigMap or its key must be defined
+
false
+ + +### Instrumentation.spec.env[index].valueFrom.fieldRef +[↩ Parent](#instrumentationspecenvindexvaluefrom) + + + +Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, +spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
fieldPathstring + Path of the field to select in the specified API version.
+
true
apiVersionstring + Version of the schema the FieldPath is written in terms of, defaults to "v1".
+
false
+ + +### Instrumentation.spec.env[index].valueFrom.resourceFieldRef +[↩ Parent](#instrumentationspecenvindexvaluefrom) + + + +Selects a resource of the container: only resources limits and requests +(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
resourcestring + Required: resource to select
+
true
containerNamestring + Container name: required for volumes, optional for env vars
+
false
divisorint or string + Specifies the output format of the exposed resources, defaults to "1"
+
false
+ + +### Instrumentation.spec.env[index].valueFrom.secretKeyRef +[↩ Parent](#instrumentationspecenvindexvaluefrom) + + + +Selects a key of a secret in the pod's namespace + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
keystring + The key of the secret to select from. Must be a valid secret key.
+
true
namestring + Name of the referent. +This field is effectively required, but due to backwards compatibility is +allowed to be empty. Instances of this type with an empty value here are +almost certainly wrong. +More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+
+ Default:
+
false
optionalboolean + Specify whether the Secret or its key must be defined
+
false
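Pulling the `env`, `valueFrom`, and `secretKeyRef` tables above together, a hedged sketch of an `Instrumentation` that sets one plain environment variable and one sourced from a Secret could look like this (the Secret name and key are hypothetical):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation        # illustrative name
spec:
  env:
    - name: OTEL_SERVICE_NAME     # plain literal value
      value: checkout
    - name: OTEL_EXPORTER_OTLP_HEADERS
      valueFrom:                  # cannot be combined with a non-empty value
        secretKeyRef:
          name: otlp-auth         # hypothetical Secret in the pod's namespace
          key: headers            # key within that Secret
          optional: true          # pod still starts if the Secret is absent
```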
+ + +### Instrumentation.spec.exporter +[↩ Parent](#instrumentationspec) + + + +Exporter defines exporter configuration. + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
endpointstring
          + Endpoint is the address of the collector with an OTLP endpoint.
+If the endpoint defines the https:// scheme, TLS has to be specified.
+
false
tlsobject
          + TLS defines certificates for TLS.
+TLS needs to be enabled by specifying the https:// scheme in the Endpoint.
+
false
+ + +### Instrumentation.spec.exporter.tls +[↩ Parent](#instrumentationspecexporter) + + + +TLS defines certificates for TLS. +TLS needs to be enabled by specifying https:// scheme in the Endpoint. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
ca_filestring
          + CA defines the key of the certificate (e.g. ca.crt) in the config map or secret, or an absolute path to a certificate.
+The absolute path can be used when the certificate is already present on the workload filesystem, e.g.
+/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
+
false
cert_filestring
          + Cert defines the key (e.g. tls.crt) of the client certificate in the secret, or an absolute path to a certificate.
+The absolute path can be used when the certificate is already present on the workload filesystem.
+
false
configMapNamestring
          + ConfigMapName defines the name of the config map with the CA certificate. If it is not defined, the CA certificate will be
+used from the secret defined in SecretName.
+
false
key_filestring
          + Key defines the key (e.g. tls.key) of the private key in the secret, or an absolute path to the private key.
+The absolute path can be used when the key is already present on the workload filesystem.
+
false
secretNamestring
          + SecretName defines the secret name that will be used to configure TLS on the exporter.
+It is the user's responsibility to create the secret in the namespace of the workload.
+The secret must contain the client certificate (Cert) and the private key (Key).
+The CA certificate might be defined in the secret or in the config map.
+
false
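Combining the `exporter` and `exporter.tls` tables above, a sketch of an OTLP exporter with mTLS might look like the following; the endpoint, Secret, and ConfigMap names are illustrative assumptions, while the field names come from the tables (note the `https://` scheme, which the docs say is required to enable TLS):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation             # illustrative name
spec:
  exporter:
    endpoint: https://otel-collector:4317  # https:// scheme enables TLS
    tls:
      secretName: otel-client-tls      # hypothetical Secret holding client cert + key
      configMapName: otel-ca           # hypothetical ConfigMap holding the CA cert
      ca_file: ca.crt                  # key of the CA cert in the ConfigMap
      cert_file: tls.crt               # key of the client cert in the Secret
      key_file: tls.key                # key of the private key in the Secret
```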
+ + +### Instrumentation.spec.go +[↩ Parent](#instrumentationspec) + + + +Go defines configuration for Go auto-instrumentation. +When using Go auto-instrumentation you must provide a value for the OTEL_GO_AUTO_TARGET_EXE env var via the +Instrumentation env vars or via the instrumentation.opentelemetry.io/otel-go-auto-target-exe pod annotation. +Failure to set this value causes instrumentation injection to abort, leaving the original pod unchanged. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
env[]object
          + Env defines Go-specific env vars. There are four layers for env vars' definitions and
+the precedence order is: `original container env vars` > `language specific env vars` > `common env vars` > `instrument spec configs' vars`.
+If a variable is defined in an earlier layer, definitions of the same variable in later layers are ignored.
+
false
imagestring + Image is a container image with Go SDK and auto-instrumentation.
+
false
resourceRequirementsobject + Resources describes the compute resource requirements.
+
false
volumeClaimTemplateobject
          + VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation.
+If omitted, an emptyDir is used with the size limit VolumeSizeLimit.
+
false
volumeLimitSizeint or string
          + VolumeSizeLimit defines the size limit for the volume used for auto-instrumentation.
+The default size is 200Mi.
+
false
+ + +### Instrumentation.spec.go.env[index] +[↩ Parent](#instrumentationspecgo) + + + +EnvVar represents an environment variable present in a Container. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
namestring + Name of the environment variable. Must be a C_IDENTIFIER.
+
true
valuestring + Variable references $(VAR_NAME) are expanded +using the previously defined environment variables in the container and +any service environment variables. If a variable cannot be resolved, +the reference in the input string will be unchanged. Double $$ are reduced +to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. +"$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". +Escaped references will never be expanded, regardless of whether the variable +exists or not. +Defaults to "".
+
false
valueFromobject + Source for the environment variable's value. Cannot be used if value is not empty.
+
false
+ + +### Instrumentation.spec.go.env[index].valueFrom +[↩ Parent](#instrumentationspecgoenvindex) + + + +Source for the environment variable's value. Cannot be used if value is not empty. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
configMapKeyRefobject + Selects a key of a ConfigMap.
+
false
fieldRefobject + Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, +spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
+
false
resourceFieldRefobject + Selects a resource of the container: only resources limits and requests +(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
+
false
secretKeyRefobject + Selects a key of a secret in the pod's namespace
+
false
+ + +### Instrumentation.spec.go.env[index].valueFrom.configMapKeyRef +[↩ Parent](#instrumentationspecgoenvindexvaluefrom) + + + +Selects a key of a ConfigMap. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
keystring + The key to select.
+
true
namestring + Name of the referent. +This field is effectively required, but due to backwards compatibility is +allowed to be empty. Instances of this type with an empty value here are +almost certainly wrong. +More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+
+ Default:
+
false
optionalboolean + Specify whether the ConfigMap or its key must be defined
+
false
+ + +### Instrumentation.spec.go.env[index].valueFrom.fieldRef +[↩ Parent](#instrumentationspecgoenvindexvaluefrom) + + + +Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, +spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
fieldPathstring + Path of the field to select in the specified API version.
+
true
apiVersionstring + Version of the schema the FieldPath is written in terms of, defaults to "v1".
+
false
+ + +### Instrumentation.spec.go.env[index].valueFrom.resourceFieldRef +[↩ Parent](#instrumentationspecgoenvindexvaluefrom) + + + +Selects a resource of the container: only resources limits and requests +(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
resourcestring + Required: resource to select
+
true
containerNamestring + Container name: required for volumes, optional for env vars
+
false
divisorint or string + Specifies the output format of the exposed resources, defaults to "1"
+
false
+ + +### Instrumentation.spec.go.env[index].valueFrom.secretKeyRef +[↩ Parent](#instrumentationspecgoenvindexvaluefrom) + + + +Selects a key of a secret in the pod's namespace + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
keystring + The key of the secret to select from. Must be a valid secret key.
+
true
namestring + Name of the referent. +This field is effectively required, but due to backwards compatibility is +allowed to be empty. Instances of this type with an empty value here are +almost certainly wrong. +More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+
+ Default:
+
false
optionalboolean + Specify whether the Secret or its key must be defined
+
false
+ + +### Instrumentation.spec.go.resourceRequirements +[↩ Parent](#instrumentationspecgo) + + + +Resources describes the compute resource requirements. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
claims[]object + Claims lists the names of resources, defined in spec.resourceClaims, +that are used by this container. + +This is an alpha field and requires enabling the +DynamicResourceAllocation feature gate. + +This field is immutable. It can only be set for containers.
+
false
limitsmap[string]int or string + Limits describes the maximum amount of compute resources allowed. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+
false
requestsmap[string]int or string + Requests describes the minimum amount of compute resources required. +If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, +otherwise to an implementation-defined value. Requests cannot exceed Limits. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+
false
+ + +### Instrumentation.spec.go.resourceRequirements.claims[index] +[↩ Parent](#instrumentationspecgoresourcerequirements) + + + +ResourceClaim references one entry in PodSpec.ResourceClaims. + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
namestring + Name must match the name of one entry in pod.spec.resourceClaims of +the Pod where this field is used. It makes that resource available +inside a container.
+
true
requeststring + Request is the name chosen for a request in the referenced claim. +If empty, everything from the claim is made available, otherwise +only the result of this request.
+
false
+ + +### Instrumentation.spec.go.volumeClaimTemplate +[↩ Parent](#instrumentationspecgo) + + + +VolumeClaimTemplate defines a ephemeral volume used for auto-instrumentation. +If omitted, an emptyDir is used with size limit VolumeSizeLimit + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
specobject + The specification for the PersistentVolumeClaim. The entire content is +copied unchanged into the PVC that gets created from this +template. The same fields as in a PersistentVolumeClaim +are also valid here.
+
true
metadataobject + May contain labels and annotations that will be copied into the PVC +when creating it. No other fields are allowed and will be rejected during +validation.
+
false
+ + +### Instrumentation.spec.go.volumeClaimTemplate.spec +[↩ Parent](#instrumentationspecgovolumeclaimtemplate) + + + +The specification for the PersistentVolumeClaim. The entire content is +copied unchanged into the PVC that gets created from this +template. The same fields as in a PersistentVolumeClaim +are also valid here. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
accessModes[]string + accessModes contains the desired access modes the volume should have. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+
false
dataSourceobject + dataSource field can be used to specify either: +* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) +* An existing PVC (PersistentVolumeClaim) +If the provisioner or an external controller can support the specified data source, +it will create a new volume based on the contents of the specified data source. +When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, +and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. +If the namespace is specified, then dataSourceRef will not be copied to dataSource.
+
false
dataSourceRefobject + dataSourceRef specifies the object from which to populate the volume with data, if a non-empty +volume is desired. This may be any object from a non-empty API group (non +core object) or a PersistentVolumeClaim object. +When this field is specified, volume binding will only succeed if the type of +the specified object matches some installed volume populator or dynamic +provisioner. +This field will replace the functionality of the dataSource field and as such +if both fields are non-empty, they must have the same value. For backwards +compatibility, when namespace isn't specified in dataSourceRef, +both fields (dataSource and dataSourceRef) will be set to the same +value automatically if one of them is empty and the other is non-empty. +When namespace is specified in dataSourceRef, +dataSource isn't set to the same value and must be empty. +There are three important differences between dataSource and dataSourceRef: +* While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects.
+
false
resourcesobject + resources represents the minimum resources the volume should have. +If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements +that are lower than previous value but must still be higher than capacity recorded in the +status field of the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
+
false
selectorobject + selector is a label query over volumes to consider for binding.
+
false
storageClassNamestring + storageClassName is the name of the StorageClass required by the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
+
false
volumeAttributesClassNamestring + volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. +If specified, the CSI driver will create or update the volume with the attributes defined +in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName: +it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass +will be applied to the claim, but it's not allowed to reset this field to empty string once it is set. +If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass +will be set by the persistentvolume controller if it exists. +If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be +set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource +exists. +More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ +(Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default).
+
false
volumeModestring + volumeMode defines what type of volume is required by the claim. +Value of Filesystem is implied when not included in claim spec.
+
false
volumeNamestring + volumeName is the binding reference to the PersistentVolume backing this claim.
+
false
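As an illustration, the volumeClaimTemplate spec fields above can be combined as in the following sketch; the label, storage class, and request size are placeholders, not defaults prescribed by the operator:

```yaml
# Hypothetical Instrumentation fragment: a PVC template for the Go
# auto-instrumentation volume. All names and sizes are examples only.
spec:
  go:
    volumeClaimTemplate:
      metadata:
        labels:
          app.kubernetes.io/managed-by: opentelemetry-operator  # example label
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: standard   # assumed storage class
        resources:
          requests:
            storage: 200Mi           # mirrors the 200Mi default of VolumeSizeLimit
```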
+ + +### Instrumentation.spec.go.volumeClaimTemplate.spec.dataSource +[↩ Parent](#instrumentationspecgovolumeclaimtemplatespec) + + + +dataSource field can be used to specify either: +* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) +* An existing PVC (PersistentVolumeClaim) +If the provisioner or an external controller can support the specified data source, +it will create a new volume based on the contents of the specified data source. +When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, +and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. +If the namespace is specified, then dataSourceRef will not be copied to dataSource. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
kindstring + Kind is the type of resource being referenced
+
true
namestring + Name is the name of resource being referenced
+
true
apiGroupstring + APIGroup is the group for the resource being referenced. +If APIGroup is not specified, the specified Kind must be in the core API group. +For any other third-party types, APIGroup is required.
+
false
+ + +### Instrumentation.spec.go.volumeClaimTemplate.spec.dataSourceRef +[↩ Parent](#instrumentationspecgovolumeclaimtemplatespec) + + + +dataSourceRef specifies the object from which to populate the volume with data, if a non-empty +volume is desired. This may be any object from a non-empty API group (non +core object) or a PersistentVolumeClaim object. +When this field is specified, volume binding will only succeed if the type of +the specified object matches some installed volume populator or dynamic +provisioner. +This field will replace the functionality of the dataSource field and as such +if both fields are non-empty, they must have the same value. For backwards +compatibility, when namespace isn't specified in dataSourceRef, +both fields (dataSource and dataSourceRef) will be set to the same +value automatically if one of them is empty and the other is non-empty. +When namespace is specified in dataSourceRef, +dataSource isn't set to the same value and must be empty. +There are three important differences between dataSource and dataSourceRef: +* While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
kindstring + Kind is the type of resource being referenced
+
true
namestring + Name is the name of resource being referenced
+
true
apiGroupstring + APIGroup is the group for the resource being referenced. +If APIGroup is not specified, the specified Kind must be in the core API group. +For any other third-party types, APIGroup is required.
+
false
namespacestring + Namespace is the namespace of resource being referenced +Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. +(Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
+
false
+ + +### Instrumentation.spec.go.volumeClaimTemplate.spec.resources +[↩ Parent](#instrumentationspecgovolumeclaimtemplatespec) + + + +resources represents the minimum resources the volume should have. +If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements +that are lower than previous value but must still be higher than capacity recorded in the +status field of the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
limitsmap[string]int or string + Limits describes the maximum amount of compute resources allowed. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+
false
requestsmap[string]int or string + Requests describes the minimum amount of compute resources required. +If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, +otherwise to an implementation-defined value. Requests cannot exceed Limits. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+
false
+ + +### Instrumentation.spec.go.volumeClaimTemplate.spec.selector +[↩ Parent](#instrumentationspecgovolumeclaimtemplatespec) + + + +selector is a label query over volumes to consider for binding. + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
matchExpressions[]object + matchExpressions is a list of label selector requirements. The requirements are ANDed.
+
false
matchLabelsmap[string]string + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels +map is equivalent to an element of matchExpressions, whose key field is "key", the +operator is "In", and the values array contains only "value". The requirements are ANDed.
+
false
+ + +### Instrumentation.spec.go.volumeClaimTemplate.spec.selector.matchExpressions[index] +[↩ Parent](#instrumentationspecgovolumeclaimtemplatespecselector) + + + +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
keystring + key is the label key that the selector applies to.
+
true
operatorstring + operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist.
+
true
values[]string + values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch.
+
false
+ + +### Instrumentation.spec.go.volumeClaimTemplate.metadata +[↩ Parent](#instrumentationspecgovolumeclaimtemplate) + + + +May contain labels and annotations that will be copied into the PVC +when creating it. No other fields are allowed and will be rejected during +validation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
annotationsmap[string]string +
+
false
finalizers[]string +
+
false
labelsmap[string]string +
+
false
namestring +
+
false
namespacestring +
+
false
+ + +### Instrumentation.spec.java +[↩ Parent](#instrumentationspec) + + + +Java defines configuration for java auto-instrumentation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
env[]object + Env defines Java-specific env vars. There are four layers of env var definitions, and +the precedence order is: `original container env vars` > `language specific env vars` > `common env vars` > `instrument spec configs' vars`. +If a variable is defined at a higher-precedence layer, definitions of the same variable at lower-precedence layers are ignored.
+
false
extensions[]object + Extensions defines java specific extensions. +All extensions are copied to a single directory; if a JAR with the same name exists, it will be overwritten.
+
false
imagestring + Image is a container image with javaagent auto-instrumentation JAR.
+
false
resourcesobject + Resources describes the compute resource requirements.
+
false
volumeClaimTemplateobject + VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. +If omitted, an emptyDir is used with a size limit of VolumeSizeLimit.
+
false
volumeLimitSizeint or string + VolumeSizeLimit defines size limit for volume used for auto-instrumentation. +The default size is 200Mi.
+
false
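Taken together, the fields above can be expressed as an Instrumentation resource such as the following sketch; the resource name, image tag, and env var are illustrative, not defaults:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: java-instrumentation          # example name
spec:
  java:
    # Illustrative image reference; pin a concrete tag in practice.
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:latest
    env:
      - name: OTEL_JAVAAGENT_DEBUG    # example javaagent setting
        value: "true"
    volumeLimitSize: 200Mi            # the documented default, made explicit
```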
+ + +### Instrumentation.spec.java.env[index] +[↩ Parent](#instrumentationspecjava) + + + +EnvVar represents an environment variable present in a Container. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
namestring + Name of the environment variable. Must be a C_IDENTIFIER.
+
true
valuestring + Variable references $(VAR_NAME) are expanded +using the previously defined environment variables in the container and +any service environment variables. If a variable cannot be resolved, +the reference in the input string will be unchanged. Double $$ are reduced +to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. +"$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". +Escaped references will never be expanded, regardless of whether the variable +exists or not. +Defaults to "".
+
false
valueFromobject + Source for the environment variable's value. Cannot be used if value is not empty.
+
false
+ + +### Instrumentation.spec.java.env[index].valueFrom +[↩ Parent](#instrumentationspecjavaenvindex) + + + +Source for the environment variable's value. Cannot be used if value is not empty. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
configMapKeyRefobject + Selects a key of a ConfigMap.
+
false
fieldRefobject + Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, +spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
+
false
resourceFieldRefobject + Selects a resource of the container: only resources limits and requests +(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
+
false
secretKeyRefobject + Selects a key of a secret in the pod's namespace
+
false
+ + +### Instrumentation.spec.java.env[index].valueFrom.configMapKeyRef +[↩ Parent](#instrumentationspecjavaenvindexvaluefrom) + + + +Selects a key of a ConfigMap. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
keystring + The key to select.
+
true
namestring + Name of the referent. +This field is effectively required, but due to backwards compatibility is +allowed to be empty. Instances of this type with an empty value here are +almost certainly wrong. +More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+
+ Default:
+
false
optionalboolean + Specify whether the ConfigMap or its key must be defined
+
false
+ + +### Instrumentation.spec.java.env[index].valueFrom.fieldRef +[↩ Parent](#instrumentationspecjavaenvindexvaluefrom) + + + +Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, +spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
fieldPathstring + Path of the field to select in the specified API version.
+
true
apiVersionstring + Version of the schema the FieldPath is written in terms of, defaults to "v1".
+
false
+ + +### Instrumentation.spec.java.env[index].valueFrom.resourceFieldRef +[↩ Parent](#instrumentationspecjavaenvindexvaluefrom) + + + +Selects a resource of the container: only resources limits and requests +(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
resourcestring + Required: resource to select
+
true
containerNamestring + Container name: required for volumes, optional for env vars
+
false
divisorint or string + Specifies the output format of the exposed resources, defaults to "1"
+
false
+ + +### Instrumentation.spec.java.env[index].valueFrom.secretKeyRef +[↩ Parent](#instrumentationspecjavaenvindexvaluefrom) + + + +Selects a key of a secret in the pod's namespace + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
keystring + The key of the secret to select from. Must be a valid secret key.
+
true
namestring + Name of the referent. +This field is effectively required, but due to backwards compatibility is +allowed to be empty. Instances of this type with an empty value here are +almost certainly wrong. +More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+
+ Default:
+
false
optionalboolean + Specify whether the Secret or its key must be defined
+
false
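The valueFrom sources described above can be mixed within a single env list; a sketch using a hypothetical Secret name:

```yaml
env:
  - name: OTEL_EXPORTER_OTLP_HEADERS
    valueFrom:
      secretKeyRef:
        name: otlp-credentials    # hypothetical Secret in the pod's namespace
        key: headers
        optional: true            # tolerate a missing Secret or key
  - name: K8S_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName  # one of the supported pod fields
```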
+ + +### Instrumentation.spec.java.extensions[index] +[↩ Parent](#instrumentationspecjava) + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
dirstring + Dir is a directory with extensions auto-instrumentation JAR.
+
true
imagestring + Image is a container image with extensions auto-instrumentation JAR.
+
true
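An extensions entry pairs a container image with the directory inside that image holding the extension JARs, which are then copied into the shared instrumentation volume; for example (the image name is hypothetical):

```yaml
java:
  extensions:
    - image: registry.example.com/otel-java-extension:1.0.0  # hypothetical image
      dir: /extensions                                       # directory containing the extension JARs
```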
+ + +### Instrumentation.spec.java.resources +[↩ Parent](#instrumentationspecjava) + + + +Resources describes the compute resource requirements. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
claims[]object + Claims lists the names of resources, defined in spec.resourceClaims, +that are used by this container. + +This is an alpha field and requires enabling the +DynamicResourceAllocation feature gate. + +This field is immutable. It can only be set for containers.
+
false
limitsmap[string]int or string + Limits describes the maximum amount of compute resources allowed. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+
false
requestsmap[string]int or string + Requests describes the minimum amount of compute resources required. +If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, +otherwise to an implementation-defined value. Requests cannot exceed Limits. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+
false
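A typical requests/limits pair for the auto-instrumentation container might look like the following; the values are placeholders, not recommendations:

```yaml
java:
  resources:
    requests:
      cpu: 50m       # example request
      memory: 64Mi
    limits:
      cpu: 100m      # requests must not exceed these limits
      memory: 128Mi
```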
+ + +### Instrumentation.spec.java.resources.claims[index] +[↩ Parent](#instrumentationspecjavaresources) + + + +ResourceClaim references one entry in PodSpec.ResourceClaims. + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
namestring + Name must match the name of one entry in pod.spec.resourceClaims of +the Pod where this field is used. It makes that resource available +inside a container.
+
true
requeststring + Request is the name chosen for a request in the referenced claim. +If empty, everything from the claim is made available, otherwise +only the result of this request.
+
false
+ + +### Instrumentation.spec.java.volumeClaimTemplate +[↩ Parent](#instrumentationspecjava) + + + +VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. +If omitted, an emptyDir is used with a size limit of VolumeSizeLimit. + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
specobject + The specification for the PersistentVolumeClaim. The entire content is +copied unchanged into the PVC that gets created from this +template. The same fields as in a PersistentVolumeClaim +are also valid here.
+
true
metadataobject + May contain labels and annotations that will be copied into the PVC +when creating it. No other fields are allowed and will be rejected during +validation.
+
false
+ + +### Instrumentation.spec.java.volumeClaimTemplate.spec +[↩ Parent](#instrumentationspecjavavolumeclaimtemplate) + + + +The specification for the PersistentVolumeClaim. The entire content is +copied unchanged into the PVC that gets created from this +template. The same fields as in a PersistentVolumeClaim +are also valid here. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
accessModes[]string + accessModes contains the desired access modes the volume should have. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+
false
dataSourceobject + dataSource field can be used to specify either: +* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) +* An existing PVC (PersistentVolumeClaim) +If the provisioner or an external controller can support the specified data source, +it will create a new volume based on the contents of the specified data source. +When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, +and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. +If the namespace is specified, then dataSourceRef will not be copied to dataSource.
+
false
dataSourceRefobject + dataSourceRef specifies the object from which to populate the volume with data, if a non-empty +volume is desired. This may be any object from a non-empty API group (non +core object) or a PersistentVolumeClaim object. +When this field is specified, volume binding will only succeed if the type of +the specified object matches some installed volume populator or dynamic +provisioner. +This field will replace the functionality of the dataSource field and as such +if both fields are non-empty, they must have the same value. For backwards +compatibility, when namespace isn't specified in dataSourceRef, +both fields (dataSource and dataSourceRef) will be set to the same +value automatically if one of them is empty and the other is non-empty. +When namespace is specified in dataSourceRef, +dataSource isn't set to the same value and must be empty. +There are three important differences between dataSource and dataSourceRef: +* While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects.
+
false
resourcesobject + resources represents the minimum resources the volume should have. +If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements +that are lower than previous value but must still be higher than capacity recorded in the +status field of the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
+
false
selectorobject + selector is a label query over volumes to consider for binding.
+
false
storageClassNamestring + storageClassName is the name of the StorageClass required by the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
+
false
volumeAttributesClassNamestring + volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. +If specified, the CSI driver will create or update the volume with the attributes defined +in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName: +it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass +will be applied to the claim, but it's not allowed to reset this field to empty string once it is set. +If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass +will be set by the persistentvolume controller if it exists. +If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be +set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource +exists. +More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ +(Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default).
+
false
volumeModestring + volumeMode defines what type of volume is required by the claim. +Value of Filesystem is implied when not included in claim spec.
+
false
volumeNamestring + volumeName is the binding reference to the PersistentVolume backing this claim.
+
false
+ + +### Instrumentation.spec.java.volumeClaimTemplate.spec.dataSource +[↩ Parent](#instrumentationspecjavavolumeclaimtemplatespec) + + + +dataSource field can be used to specify either: +* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) +* An existing PVC (PersistentVolumeClaim) +If the provisioner or an external controller can support the specified data source, +it will create a new volume based on the contents of the specified data source. +When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, +and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. +If the namespace is specified, then dataSourceRef will not be copied to dataSource. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
kindstring + Kind is the type of resource being referenced
+
true
namestring + Name is the name of resource being referenced
+
true
apiGroupstring + APIGroup is the group for the resource being referenced. +If APIGroup is not specified, the specified Kind must be in the core API group. +For any other third-party types, APIGroup is required.
+
false
+ + +### Instrumentation.spec.java.volumeClaimTemplate.spec.dataSourceRef +[↩ Parent](#instrumentationspecjavavolumeclaimtemplatespec) + + + +dataSourceRef specifies the object from which to populate the volume with data, if a non-empty +volume is desired. This may be any object from a non-empty API group (non +core object) or a PersistentVolumeClaim object. +When this field is specified, volume binding will only succeed if the type of +the specified object matches some installed volume populator or dynamic +provisioner. +This field will replace the functionality of the dataSource field and as such +if both fields are non-empty, they must have the same value. For backwards +compatibility, when namespace isn't specified in dataSourceRef, +both fields (dataSource and dataSourceRef) will be set to the same +value automatically if one of them is empty and the other is non-empty. +When namespace is specified in dataSourceRef, +dataSource isn't set to the same value and must be empty. +There are three important differences between dataSource and dataSourceRef: +* While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
kindstring + Kind is the type of resource being referenced
+
true
namestring + Name is the name of resource being referenced
+
true
apiGroupstring + APIGroup is the group for the resource being referenced. +If APIGroup is not specified, the specified Kind must be in the core API group. +For any other third-party types, APIGroup is required.
+
false
namespacestring + Namespace is the namespace of resource being referenced +Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. +(Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
+
false
+ + +### Instrumentation.spec.java.volumeClaimTemplate.spec.resources +[↩ Parent](#instrumentationspecjavavolumeclaimtemplatespec) + + + +resources represents the minimum resources the volume should have. +If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements +that are lower than previous value but must still be higher than capacity recorded in the +status field of the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
limitsmap[string]int or string + Limits describes the maximum amount of compute resources allowed. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+
false
requestsmap[string]int or string + Requests describes the minimum amount of compute resources required. +If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, +otherwise to an implementation-defined value. Requests cannot exceed Limits. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+
false
+ + +### Instrumentation.spec.java.volumeClaimTemplate.spec.selector +[↩ Parent](#instrumentationspecjavavolumeclaimtemplatespec) + + + +selector is a label query over volumes to consider for binding. + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
matchExpressions[]object + matchExpressions is a list of label selector requirements. The requirements are ANDed.
+
false
matchLabelsmap[string]string + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels +map is equivalent to an element of matchExpressions, whose key field is "key", the +operator is "In", and the values array contains only "value". The requirements are ANDed.
+
false
+ + +### Instrumentation.spec.java.volumeClaimTemplate.spec.selector.matchExpressions[index] +[↩ Parent](#instrumentationspecjavavolumeclaimtemplatespecselector) + + + +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
keystring + key is the label key that the selector applies to.
+
true
operatorstring + operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist.
+
true
values[]string + values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch.
+
false
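matchLabels and matchExpressions requirements are ANDed together; a small example selector:

```yaml
selector:
  matchLabels:
    usage: auto-instrumentation     # example label
  matchExpressions:
    - key: environment              # example key
      operator: In
      values: ["dev", "staging"]    # In/NotIn require a non-empty values array
```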
+ + +### Instrumentation.spec.java.volumeClaimTemplate.metadata +[↩ Parent](#instrumentationspecjavavolumeclaimtemplate) + + + +May contain labels and annotations that will be copied into the PVC +when creating it. No other fields are allowed and will be rejected during +validation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
annotationsmap[string]string +
+
false
finalizers[]string +
+
false
labelsmap[string]string +
+
false
namestring +
+
false
namespacestring +
+
false
+ + +### Instrumentation.spec.nginx +[↩ Parent](#instrumentationspec) + + + +Nginx defines configuration for Nginx auto-instrumentation. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
attrs[]object + Attrs defines Nginx agent-specific attributes. The precedence order is: +`agent default attributes` > `instrument spec attributes`. +Attributes are documented at https://github.com/open-telemetry/opentelemetry-cpp-contrib/tree/main/instrumentation/otel-webserver-module
+
false
configFilestring + Location of the Nginx configuration file. +Needed only if different from the default "/etc/nginx/nginx.conf".
+
false
env[]object + Env defines Nginx-specific env vars. There are four layers of env var definitions, and +the precedence order is: `original container env vars` > `language specific env vars` > `common env vars` > `instrument spec configs' vars`. +If a variable is defined at a higher-precedence layer, definitions of the same variable at lower-precedence layers are ignored.
+
false
imagestring + Image is a container image with Nginx SDK and auto-instrumentation.
+
false
resourceRequirementsobject + Resources describes the compute resource requirements.
+
false
volumeClaimTemplateobject + VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. +If omitted, an emptyDir is used with a size limit of VolumeSizeLimit.
+
false
volumeLimitSizeint or string + VolumeSizeLimit defines size limit for volume used for auto-instrumentation. +The default size is 200Mi.
+
false
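For Nginx the same pattern applies; a sketch with illustrative attrs (the attribute names come from the otel-webserver-module, and the image reference and values are examples, not defaults):

```yaml
spec:
  nginx:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:latest  # illustrative
    configFile: /etc/nginx/nginx.conf        # only needed if non-default
    attrs:
      - name: NginxModuleOtelMaxQueueSize    # webserver-module attribute
        value: "4096"
```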
+ + +### Instrumentation.spec.nginx.attrs[index] +[↩ Parent](#instrumentationspecnginx) + + + +EnvVar represents an environment variable present in a Container. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
namestring + Name of the environment variable. Must be a C_IDENTIFIER.
+
true
valuestring + Variable references $(VAR_NAME) are expanded +using the previously defined environment variables in the container and +any service environment variables. If a variable cannot be resolved, +the reference in the input string will be unchanged. Double $$ are reduced +to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. +"$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". +Escaped references will never be expanded, regardless of whether the variable +exists or not. +Defaults to "".
+
false
valueFromobject + Source for the environment variable's value. Cannot be used if value is not empty.
+
false
+ + +### Instrumentation.spec.nginx.attrs[index].valueFrom +[↩ Parent](#instrumentationspecnginxattrsindex) + + + +Source for the environment variable's value. Cannot be used if value is not empty. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
configMapKeyRefobject + Selects a key of a ConfigMap.
+
false
fieldRefobject + Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, +spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
+
false
resourceFieldRefobject + Selects a resource of the container: only resources limits and requests +(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
+
false
secretKeyRefobject + Selects a key of a secret in the pod's namespace
+
false
+ + +### Instrumentation.spec.nginx.attrs[index].valueFrom.configMapKeyRef +[↩ Parent](#instrumentationspecnginxattrsindexvaluefrom) + + + +Selects a key of a ConfigMap. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
keystring + The key to select.
+
true
namestring + Name of the referent. +This field is effectively required, but due to backwards compatibility is +allowed to be empty. Instances of this type with an empty value here are +almost certainly wrong. +More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+
+ Default:
+
false
optionalboolean + Specify whether the ConfigMap or its key must be defined
+
false
+ + +### Instrumentation.spec.nginx.attrs[index].valueFrom.fieldRef +[↩ Parent](#instrumentationspecnginxattrsindexvaluefrom) + + + +Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, +spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
fieldPathstring + Path of the field to select in the specified API version.
+
true
apiVersionstring + Version of the schema the FieldPath is written in terms of, defaults to "v1".
+
false
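A minimal `fieldRef` sketch, selecting one of the supported pod fields listed above:

```yaml
valueFrom:
  fieldRef:
    apiVersion: v1            # defaults to "v1" if omitted
    fieldPath: status.podIP   # e.g. metadata.name, spec.nodeName, status.podIP
```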
+ + +### Instrumentation.spec.nginx.attrs[index].valueFrom.resourceFieldRef +[↩ Parent](#instrumentationspecnginxattrsindexvaluefrom) + + + +Selects a resource of the container: only resources limits and requests +(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
resourcestring + Required: resource to select
+
true
containerNamestring + Container name: required for volumes, optional for env vars
+
false
divisorint or string + Specifies the output format of the exposed resources, defaults to "1"
+
false
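A minimal `resourceFieldRef` sketch; the container name `app` is a hypothetical example:

```yaml
valueFrom:
  resourceFieldRef:
    containerName: app      # optional for env vars, required for volumes
    resource: limits.memory # one of the supported limits.*/requests.* resources
    divisor: 1Mi            # expose the value in mebibytes
```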
+ + +### Instrumentation.spec.nginx.attrs[index].valueFrom.secretKeyRef +[↩ Parent](#instrumentationspecnginxattrsindexvaluefrom) + + + +Selects a key of a secret in the pod's namespace + + + + + + + + + + + + + + - + - - + + - - + +
NameTypeDescriptionRequired
keystring + The key of the secret to select from. Must be a valid secret key.
falsetrue
resourceRequirementsobjectnamestring - Resources describes the compute resource requirements.
+ Name of the referent. +This field is effectively required, but due to backwards compatibility is +allowed to be empty. Instances of this type with an empty value here are +almost certainly wrong. +More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+
+ Default:
false
volumeLimitSizeint or stringoptionalboolean - VolumeSizeLimit defines size limit for volume used for auto-instrumentation. -The default size is 200Mi.
+ Specify whether the Secret or its key must be defined
false
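A minimal `secretKeyRef` sketch, assuming a hypothetical Secret named `otel-auth`:

```yaml
valueFrom:
  secretKeyRef:
    name: otel-auth   # hypothetical Secret in the pod's namespace
    key: token        # must be a valid secret key
    optional: false   # the reference must resolve
```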
-### Instrumentation.spec.go.env[index] -[↩ Parent](#instrumentationspecgo) +### Instrumentation.spec.nginx.env[index] +[↩ Parent](#instrumentationspecnginx) @@ -1725,7 +4843,7 @@ Defaults to "".
false - valueFrom + valueFrom object Source for the environment variable's value. Cannot be used if value is not empty.
@@ -1735,8 +4853,8 @@ Defaults to "".
-### Instrumentation.spec.go.env[index].valueFrom -[↩ Parent](#instrumentationspecgoenvindex) +### Instrumentation.spec.nginx.env[index].valueFrom +[↩ Parent](#instrumentationspecnginxenvindex) @@ -1752,14 +4870,14 @@ Source for the environment variable's value. Cannot be used if value is not empt - configMapKeyRef + configMapKeyRef object Selects a key of a ConfigMap.
false - fieldRef + fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, @@ -1767,7 +4885,7 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI false - resourceFieldRef + resourceFieldRef object Selects a resource of the container: only resources limits and requests @@ -1775,7 +4893,7 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI false - secretKeyRef + secretKeyRef object Selects a key of a secret in the pod's namespace
@@ -1785,8 +4903,8 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI -### Instrumentation.spec.go.env[index].valueFrom.configMapKeyRef -[↩ Parent](#instrumentationspecgoenvindexvaluefrom) +### Instrumentation.spec.nginx.env[index].valueFrom.configMapKeyRef +[↩ Parent](#instrumentationspecnginxenvindexvaluefrom) @@ -1832,8 +4950,8 @@ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/nam -### Instrumentation.spec.go.env[index].valueFrom.fieldRef -[↩ Parent](#instrumentationspecgoenvindexvaluefrom) +### Instrumentation.spec.nginx.env[index].valueFrom.fieldRef +[↩ Parent](#instrumentationspecnginxenvindexvaluefrom) @@ -1867,8 +4985,8 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI -### Instrumentation.spec.go.env[index].valueFrom.resourceFieldRef -[↩ Parent](#instrumentationspecgoenvindexvaluefrom) +### Instrumentation.spec.nginx.env[index].valueFrom.resourceFieldRef +[↩ Parent](#instrumentationspecnginxenvindexvaluefrom) @@ -1909,8 +5027,8 @@ Selects a resource of the container: only resources limits and requests -### Instrumentation.spec.go.env[index].valueFrom.secretKeyRef -[↩ Parent](#instrumentationspecgoenvindexvaluefrom) +### Instrumentation.spec.nginx.env[index].valueFrom.secretKeyRef +[↩ Parent](#instrumentationspecnginxenvindexvaluefrom) @@ -1956,8 +5074,8 @@ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/nam -### Instrumentation.spec.go.resourceRequirements -[↩ Parent](#instrumentationspecgo) +### Instrumentation.spec.nginx.resourceRequirements +[↩ Parent](#instrumentationspecnginx) @@ -1973,7 +5091,7 @@ Resources describes the compute resource requirements. 
- claims + claims []object Claims lists the names of resources, defined in spec.resourceClaims, @@ -2007,8 +5125,8 @@ More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-co -### Instrumentation.spec.go.resourceRequirements.claims[index] -[↩ Parent](#instrumentationspecgoresourcerequirements) +### Instrumentation.spec.nginx.resourceRequirements.claims[index] +[↩ Parent](#instrumentationspecnginxresourcerequirements) @@ -2045,12 +5163,13 @@ only the result of this request.
-### Instrumentation.spec.java -[↩ Parent](#instrumentationspec) +### Instrumentation.spec.nginx.volumeClaimTemplate +[↩ Parent](#instrumentationspecnginx) -Java defines configuration for java auto-instrumentation. +VolumeClaimTemplate defines a ephemeral volume used for auto-instrumentation. +If omitted, an emptyDir is used with size limit VolumeSizeLimit @@ -2062,54 +5181,37 @@ Java defines configuration for java auto-instrumentation. - - - - - - - - - - - - - - - - + - + - - + +
env[]object - Env defines java specific env vars. There are four layers for env vars' definitions and -the precedence order is: `original container env vars` > `language specific env vars` > `common env vars` > `instrument spec configs' vars`. -If the former var had been defined, then the other vars would be ignored.
-
false
extensions[]object - Extensions defines java specific extensions. -All extensions are copied to a single directory; if a JAR with the same name exists, it will be overwritten.
-
false
imagestring - Image is a container image with javaagent auto-instrumentation JAR.
-
false
resourcesspec object - Resources describes the compute resource requirements.
+ The specification for the PersistentVolumeClaim. The entire content is +copied unchanged into the PVC that gets created from this +template. The same fields as in a PersistentVolumeClaim +are also valid here.
falsetrue
volumeLimitSizeint or stringmetadataobject - VolumeSizeLimit defines size limit for volume used for auto-instrumentation. -The default size is 200Mi.
+ May contain labels and annotations that will be copied into the PVC +when creating it. No other fields are allowed and will be rejected during +validation.
false
-### Instrumentation.spec.java.env[index] -[↩ Parent](#instrumentationspecjava) +### Instrumentation.spec.nginx.volumeClaimTemplate.spec +[↩ Parent](#instrumentationspecnginxvolumeclaimtemplate) -EnvVar represents an environment variable present in a Container. +The specification for the PersistentVolumeClaim. The entire content is +copied unchanged into the PVC that gets created from this +template. The same fields as in a PersistentVolumeClaim +are also valid here. @@ -2121,94 +5223,125 @@ EnvVar represents an environment variable present in a Container. - - + + - + - - + + - + - -
namestringaccessModes[]string - Name of the environment variable. Must be a C_IDENTIFIER.
+ accessModes contains the desired access modes the volume should have. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
truefalse
valuestringdataSourceobject - Variable references $(VAR_NAME) are expanded -using the previously defined environment variables in the container and -any service environment variables. If a variable cannot be resolved, -the reference in the input string will be unchanged. Double $$ are reduced -to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. -"$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". -Escaped references will never be expanded, regardless of whether the variable -exists or not. -Defaults to "".
+ dataSource field can be used to specify either: +* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) +* An existing PVC (PersistentVolumeClaim) +If the provisioner or an external controller can support the specified data source, +it will create a new volume based on the contents of the specified data source. +When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, +and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. +If the namespace is specified, then dataSourceRef will not be copied to dataSource.
false
valueFromdataSourceRef object - Source for the environment variable's value. Cannot be used if value is not empty.
+ dataSourceRef specifies the object from which to populate the volume with data, if a non-empty +volume is desired. This may be any object from a non-empty API group (non +core object) or a PersistentVolumeClaim object. +When this field is specified, volume binding will only succeed if the type of +the specified object matches some installed volume populator or dynamic +provisioner. +This field will replace the functionality of the dataSource field and as such +if both fields are non-empty, they must have the same value. For backwards +compatibility, when namespace isn't specified in dataSourceRef, +both fields (dataSource and dataSourceRef) will be set to the same +value automatically if one of them is empty and the other is non-empty. +When namespace is specified in dataSourceRef, +dataSource isn't set to the same value and must be empty. +There are three important differences between dataSource and dataSourceRef: +* While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects.
false
- - -### Instrumentation.spec.java.env[index].valueFrom -[↩ Parent](#instrumentationspecjavaenvindex) - - - -Source for the environment variable's value. Cannot be used if value is not empty. - - - - - - - - - - - - + + - + - - + + - - + + + + + + + + + + + +
NameTypeDescriptionRequired
configMapKeyRef
resources object - Selects a key of a ConfigMap.
+ resources represents the minimum resources the volume should have. +If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements +that are lower than previous value but must still be higher than capacity recorded in the +status field of the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
false
fieldRefselector object - Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, -spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
+ selector is a label query over volumes to consider for binding.
false
resourceFieldRefobjectstorageClassNamestring - Selects a resource of the container: only resources limits and requests -(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
+ storageClassName is the name of the StorageClass required by the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
false
secretKeyRefobjectvolumeAttributesClassNamestring - Selects a key of a secret in the pod's namespace
+ volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. +If specified, the CSI driver will create or update the volume with the attributes defined +in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, +it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass +will be applied to the claim but it's not allowed to reset this field to empty string once it is set. +If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass +will be set by the persistentvolume controller if it exists. +If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be +set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource +exists. +More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ +(Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default).
+
false
volumeModestring + volumeMode defines what type of volume is required by the claim. +Value of Filesystem is implied when not included in claim spec.
+
false
volumeNamestring + volumeName is the binding reference to the PersistentVolume backing this claim.
false
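Pulling the fields above together, a hedged sketch of a `volumeClaimTemplate.spec` for the instrumentation volume (the StorageClass name `standard` is an assumption about the cluster):

```yaml
volumeClaimTemplate:
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: standard   # hypothetical StorageClass
    volumeMode: Filesystem       # implied when omitted
    resources:
      requests:
        storage: 200Mi           # matches the documented VolumeSizeLimit default
```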
-### Instrumentation.spec.java.env[index].valueFrom.configMapKeyRef -[↩ Parent](#instrumentationspecjavaenvindexvaluefrom) +### Instrumentation.spec.nginx.volumeClaimTemplate.spec.dataSource +[↩ Parent](#instrumentationspecnginxvolumeclaimtemplatespec) -Selects a key of a ConfigMap. +dataSource field can be used to specify either: +* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) +* An existing PVC (PersistentVolumeClaim) +If the provisioner or an external controller can support the specified data source, +it will create a new volume based on the contents of the specified data source. +When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, +and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. +If the namespace is specified, then dataSourceRef will not be copied to dataSource. @@ -2220,43 +5353,53 @@ Selects a key of a ConfigMap. - + - + - - + +
keykind string - The key to select.
+ Kind is the type of resource being referenced
true
name string - Name of the referent. -This field is effectively required, but due to backwards compatibility is -allowed to be empty. Instances of this type with an empty value here are -almost certainly wrong. -More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
-
- Default:
+ Name is the name of resource being referenced
falsetrue
optionalbooleanapiGroupstring - Specify whether the ConfigMap or its key must be defined
+ APIGroup is the group for the resource being referenced. +If APIGroup is not specified, the specified Kind must be in the core API group. +For any other third-party types, APIGroup is required.
false
-### Instrumentation.spec.java.env[index].valueFrom.fieldRef -[↩ Parent](#instrumentationspecjavaenvindexvaluefrom) +### Instrumentation.spec.nginx.volumeClaimTemplate.spec.dataSourceRef +[↩ Parent](#instrumentationspecnginxvolumeclaimtemplatespec) -Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, -spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. +dataSourceRef specifies the object from which to populate the volume with data, if a non-empty +volume is desired. This may be any object from a non-empty API group (non +core object) or a PersistentVolumeClaim object. +When this field is specified, volume binding will only succeed if the type of +the specified object matches some installed volume populator or dynamic +provisioner. +This field will replace the functionality of the dataSource field and as such +if both fields are non-empty, they must have the same value. For backwards +compatibility, when namespace isn't specified in dataSourceRef, +both fields (dataSource and dataSourceRef) will be set to the same +value automatically if one of them is empty and the other is non-empty. +When namespace is specified in dataSourceRef, +dataSource isn't set to the same value and must be empty. +There are three important differences between dataSource and dataSourceRef: +* While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects. @@ -2268,30 +5411,51 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI - + - + + + + + + + + + + +
fieldPathkind string - Path of the field to select in the specified API version.
+ Kind is the type of resource being referenced
true
apiVersionname string - Version of the schema the FieldPath is written in terms of, defaults to "v1".
+ Name is the name of resource being referenced
+
true
apiGroupstring + APIGroup is the group for the resource being referenced. +If APIGroup is not specified, the specified Kind must be in the core API group. +For any other third-party types, APIGroup is required.
+
false
namespacestring + Namespace is the namespace of resource being referenced +Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. +(Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
false
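A hedged `dataSourceRef` sketch, pre-populating the claim from a hypothetical VolumeSnapshot:

```yaml
dataSourceRef:
  apiGroup: snapshot.storage.k8s.io # required for any non-core Kind
  kind: VolumeSnapshot
  name: agent-snapshot              # hypothetical snapshot
  # namespace: other-ns             # cross-namespace use is alpha and requires a ReferenceGrant
```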
-### Instrumentation.spec.java.env[index].valueFrom.resourceFieldRef -[↩ Parent](#instrumentationspecjavaenvindexvaluefrom) +### Instrumentation.spec.nginx.volumeClaimTemplate.spec.resources +[↩ Parent](#instrumentationspecnginxvolumeclaimtemplatespec) -Selects a resource of the container: only resources limits and requests -(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. +resources represents the minimum resources the volume should have. +If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements +that are lower than previous value but must still be higher than capacity recorded in the +status field of the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources @@ -2303,36 +5467,33 @@ Selects a resource of the container: only resources limits and requests - - - - - - - + + - - + +
resourcestring - Required: resource to select
-
true
containerNamestringlimitsmap[string]int or string - Container name: required for volumes, optional for env vars
+ Limits describes the maximum amount of compute resources allowed. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
false
divisorint or stringrequestsmap[string]int or string - Specifies the output format of the exposed resources, defaults to "1"
+ Requests describes the minimum amount of compute resources required. +If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, +otherwise to an implementation-defined value. Requests cannot exceed Limits. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
false
-### Instrumentation.spec.java.env[index].valueFrom.secretKeyRef -[↩ Parent](#instrumentationspecjavaenvindexvaluefrom) +### Instrumentation.spec.nginx.volumeClaimTemplate.spec.selector +[↩ Parent](#instrumentationspecnginxvolumeclaimtemplatespec) -Selects a key of a secret in the pod's namespace +selector is a label query over volumes to consider for binding. @@ -2344,42 +5505,32 @@ Selects a key of a secret in the pod's namespace - - - - - - - + + - - + +
keystring - The key of the secret to select from. Must be a valid secret key.
-
true
namestringmatchExpressions[]object - Name of the referent. -This field is effectively required, but due to backwards compatibility is -allowed to be empty. Instances of this type with an empty value here are -almost certainly wrong. -More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
-
- Default:
+ matchExpressions is a list of label selector requirements. The requirements are ANDed.
false
optionalbooleanmatchLabelsmap[string]string - Specify whether the Secret or its key must be defined
+ matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels +map is equivalent to an element of matchExpressions, whose key field is "key", the +operator is "In", and the values array contains only "value". The requirements are ANDed.
false
-### Instrumentation.spec.java.extensions[index] -[↩ Parent](#instrumentationspecjava) - +### Instrumentation.spec.nginx.volumeClaimTemplate.spec.selector.matchExpressions[index] +[↩ Parent](#instrumentationspecnginxvolumeclaimtemplatespecselector) +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. @@ -2391,29 +5542,42 @@ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/nam - + - + + + + + +
dirkey string - Dir is a directory with extensions auto-instrumentation JAR.
+ key is the label key that the selector applies to.
true
imageoperator string - Image is a container image with extensions auto-instrumentation JAR.
+ operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist.
true
values[]string + values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch.
+
false
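A minimal selector sketch combining both forms; all requirements are ANDed, and a `matchLabels` entry behaves like an `In` expression with a single value (label keys and values here are hypothetical):

```yaml
selector:
  matchLabels:
    app: otel-agent            # shorthand for key=app, operator=In, values=["otel-agent"]
  matchExpressions:
    - key: tier
      operator: In
      values: ["instrumentation"]
    - key: deprecated
      operator: DoesNotExist   # values must be empty for Exists/DoesNotExist
```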
-### Instrumentation.spec.java.resources -[↩ Parent](#instrumentationspecjava) +### Instrumentation.spec.nginx.volumeClaimTemplate.metadata +[↩ Parent](#instrumentationspecnginxvolumeclaimtemplate) -Resources describes the compute resource requirements. +May contain labels and annotations that will be copied into the PVC +when creating it. No other fields are allowed and will be rejected during +validation. @@ -2425,84 +5589,50 @@ Resources describes the compute resource requirements. - - + + - - + + - - + + - -
claims[]objectannotationsmap[string]string - Claims lists the names of resources, defined in spec.resourceClaims, -that are used by this container. - -This is an alpha field and requires enabling the -DynamicResourceAllocation feature gate. - -This field is immutable. It can only be set for containers.
+
false
limitsmap[string]int or stringfinalizers[]string - Limits describes the maximum amount of compute resources allowed. -More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+
false
requestsmap[string]int or stringlabelsmap[string]string - Requests describes the minimum amount of compute resources required. -If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, -otherwise to an implementation-defined value. Requests cannot exceed Limits. -More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+
false
- - -### Instrumentation.spec.java.resources.claims[index] -[↩ Parent](#instrumentationspecjavaresources) - - - -ResourceClaim references one entry in PodSpec.ResourceClaims. - - - - - - - - - - - + - + - +
NameTypeDescriptionRequired
name string - Name must match the name of one entry in pod.spec.resourceClaims of -the Pod where this field is used. It makes that resource available -inside a container.
+
truefalse
requestnamespace string - Request is the name chosen for a request in the referenced claim. -If empty, everything from the claim is made available, otherwise -only the result of this request.
+
false
-### Instrumentation.spec.nginx +### Instrumentation.spec.nodejs [↩ Parent](#instrumentationspec) -Nginx defines configuration for Nginx auto-instrumentation. +NodeJS defines configuration for nodejs auto-instrumentation. @@ -2514,27 +5644,10 @@ Nginx defines configuration for Nginx auto-instrumentation. - - - - - - - - - - - + @@ -2543,16 +5656,24 @@ If the former var had been defined, then the other vars would be ignored.
- + + + + + + @@ -2565,8 +5686,8 @@ The default size is 200Mi.
attrs[]object - Attrs defines Nginx agent specific attributes. The precedence order is: -`agent default attributes` > `instrument spec attributes` . -Attributes are documented at https://github.com/open-telemetry/opentelemetry-cpp-contrib/tree/main/instrumentation/otel-webserver-module
-
false
configFilestring - Location of Nginx configuration file. -Needed only if different from default "/etx/nginx/nginx.conf"
-
false
envenv []object - Env defines Nginx specific env vars. There are four layers for env vars' definitions and + Env defines nodejs specific env vars. There are four layers for env vars' definitions and the precedence order is: `original container env vars` > `language specific env vars` > `common env vars` > `instrument spec configs' vars`. If the former var had been defined, then the other vars would be ignored.
image string - Image is a container image with Nginx SDK and auto-instrumentation.
+ Image is a container image with NodeJS SDK and auto-instrumentation.
false
resourceRequirementsresourceRequirements object Resources describes the compute resource requirements.
false
volumeClaimTemplateobject + VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. +If omitted, an emptyDir is used with size limit VolumeSizeLimit
+
false
volumeLimitSize int or string
-### Instrumentation.spec.nginx.attrs[index] -[↩ Parent](#instrumentationspecnginx) +### Instrumentation.spec.nodejs.env[index] +[↩ Parent](#instrumentationspecnodejs) @@ -2604,7 +5725,7 @@ Defaults to "".
false - valueFrom + valueFrom object Source for the environment variable's value. Cannot be used if value is not empty.
@@ -2614,9 +5735,9 @@ Defaults to "".
-### Instrumentation.spec.nginx.attrs[index].valueFrom -[↩ Parent](#instrumentationspecnginxattrsindex) - +### Instrumentation.spec.nodejs.env[index].valueFrom +[↩ Parent](#instrumentationspecnodejsenvindex) + Source for the environment variable's value. Cannot be used if value is not empty. @@ -2631,14 +5752,14 @@ Source for the environment variable's value. Cannot be used if value is not empt - configMapKeyRef + configMapKeyRef object Selects a key of a ConfigMap.
false - fieldRef + fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, @@ -2646,7 +5767,7 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI false - resourceFieldRef + resourceFieldRef object Selects a resource of the container: only resources limits and requests @@ -2654,7 +5775,7 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI false - secretKeyRef + secretKeyRef object Selects a key of a secret in the pod's namespace
@@ -2664,8 +5785,8 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI -### Instrumentation.spec.nginx.attrs[index].valueFrom.configMapKeyRef -[↩ Parent](#instrumentationspecnginxattrsindexvaluefrom) +### Instrumentation.spec.nodejs.env[index].valueFrom.configMapKeyRef +[↩ Parent](#instrumentationspecnodejsenvindexvaluefrom) @@ -2711,8 +5832,8 @@ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/nam -### Instrumentation.spec.nginx.attrs[index].valueFrom.fieldRef -[↩ Parent](#instrumentationspecnginxattrsindexvaluefrom) +### Instrumentation.spec.nodejs.env[index].valueFrom.fieldRef +[↩ Parent](#instrumentationspecnodejsenvindexvaluefrom) @@ -2746,8 +5867,8 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI -### Instrumentation.spec.nginx.attrs[index].valueFrom.resourceFieldRef -[↩ Parent](#instrumentationspecnginxattrsindexvaluefrom) +### Instrumentation.spec.nodejs.env[index].valueFrom.resourceFieldRef +[↩ Parent](#instrumentationspecnodejsenvindexvaluefrom) @@ -2788,8 +5909,8 @@ Selects a resource of the container: only resources limits and requests -### Instrumentation.spec.nginx.attrs[index].valueFrom.secretKeyRef -[↩ Parent](#instrumentationspecnginxattrsindexvaluefrom) +### Instrumentation.spec.nodejs.env[index].valueFrom.secretKeyRef +[↩ Parent](#instrumentationspecnodejsenvindexvaluefrom) @@ -2835,12 +5956,63 @@ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/nam -### Instrumentation.spec.nginx.env[index] -[↩ Parent](#instrumentationspecnginx) +### Instrumentation.spec.nodejs.resourceRequirements +[↩ Parent](#instrumentationspecnodejs) -EnvVar represents an environment variable present in a Container. +Resources describes the compute resource requirements. + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
claims[]object + Claims lists the names of resources, defined in spec.resourceClaims, +that are used by this container. + +This is an alpha field and requires enabling the +DynamicResourceAllocation feature gate. + +This field is immutable. It can only be set for containers.
+
false
limitsmap[string]int or string + Limits describes the maximum amount of compute resources allowed. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+
false
requestsmap[string]int or string + Requests describes the minimum amount of compute resources required. +If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, +otherwise to an implementation-defined value. Requests cannot exceed Limits. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+
false
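A hedged sketch of `resourceRequirements` on the nodejs instrumentation (the image tag is illustrative, not a pinned recommendation):

```yaml
nodejs:
  image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs:latest # hypothetical tag
  resourceRequirements:
    requests:
      cpu: 50m
      memory: 64Mi
    limits:
      memory: 128Mi   # if requests were omitted, they would default to these limits
```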
+ + +### Instrumentation.spec.nodejs.resourceRequirements.claims[index] +[↩ Parent](#instrumentationspecnodejsresourcerequirements) + + + +ResourceClaim references one entry in PodSpec.ResourceClaims. @@ -2855,41 +6027,73 @@ EnvVar represents an environment variable present in a Container. - + + +
name string - Name of the environment variable. Must be a C_IDENTIFIER.
+ Name must match the name of one entry in pod.spec.resourceClaims of +the Pod where this field is used. It makes that resource available +inside a container.
true
valuerequest string - Variable references $(VAR_NAME) are expanded -using the previously defined environment variables in the container and -any service environment variables. If a variable cannot be resolved, -the reference in the input string will be unchanged. Double $$ are reduced -to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. -"$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". -Escaped references will never be expanded, regardless of whether the variable -exists or not. -Defaults to "".
+ Request is the name chosen for a request in the referenced claim. +If empty, everything from the claim is made available, otherwise +only the result of this request.
false
+ + +### Instrumentation.spec.nodejs.volumeClaimTemplate +[↩ Parent](#instrumentationspecnodejs) + + + +VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. +If omitted, an emptyDir is used with size limit VolumeSizeLimit + + + + + + + + + + + + + + + 
NameTypeDescriptionRequired
specobject + The specification for the PersistentVolumeClaim. The entire content is +copied unchanged into the PVC that gets created from this +template. The same fields as in a PersistentVolumeClaim +are also valid here.
+
true
valueFrommetadata object - Source for the environment variable's value. Cannot be used if value is not empty.
+ May contain labels and annotations that will be copied into the PVC +when creating it. No other fields are allowed and will be rejected during +validation.
false
-### Instrumentation.spec.nginx.env[index].valueFrom -[↩ Parent](#instrumentationspecnginxenvindex) +### Instrumentation.spec.nodejs.volumeClaimTemplate.spec +[↩ Parent](#instrumentationspecnodejsvolumeclaimtemplate) -Source for the environment variable's value. Cannot be used if value is not empty. +The specification for the PersistentVolumeClaim. The entire content is +copied unchanged into the PVC that gets created from this +template. The same fields as in a PersistentVolumeClaim +are also valid here. @@ -2901,45 +6105,125 @@ Source for the environment variable's value. Cannot be used if value is not empt - + + + + + + - + - + - + + + + + + + + + + + + + + + + + + + + +
configMapKeyRefaccessModes[]string + accessModes contains the desired access modes the volume should have. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+
false
dataSource object - Selects a key of a ConfigMap.
+ dataSource field can be used to specify either: +* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) +* An existing PVC (PersistentVolumeClaim) +If the provisioner or an external controller can support the specified data source, +it will create a new volume based on the contents of the specified data source. +When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, +and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. +If the namespace is specified, then dataSourceRef will not be copied to dataSource.
false
fieldRefdataSourceRef object - Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, -spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
+ dataSourceRef specifies the object from which to populate the volume with data, if a non-empty +volume is desired. This may be any object from a non-empty API group (non +core object) or a PersistentVolumeClaim object. +When this field is specified, volume binding will only succeed if the type of +the specified object matches some installed volume populator or dynamic +provisioner. +This field will replace the functionality of the dataSource field and as such +if both fields are non-empty, they must have the same value. For backwards +compatibility, when namespace isn't specified in dataSourceRef, +both fields (dataSource and dataSourceRef) will be set to the same +value automatically if one of them is empty and the other is non-empty. +When namespace is specified in dataSourceRef, +dataSource isn't set to the same value and must be empty. +There are three important differences between dataSource and dataSourceRef: +* While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects.
false
resourceFieldRefresources object - Selects a resource of the container: only resources limits and requests -(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
+ resources represents the minimum resources the volume should have. +If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements +that are lower than previous value but must still be higher than capacity recorded in the +status field of the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
false
secretKeyRefselector object - Selects a key of a secret in the pod's namespace
+ selector is a label query over volumes to consider for binding.
+
false
storageClassNamestring + storageClassName is the name of the StorageClass required by the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
+
false
volumeAttributesClassNamestring + volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. +If specified, the CSI driver will create or update the volume with the attributes defined +in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, +it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass +will be applied to the claim but it's not allowed to reset this field to empty string once it is set. +If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass +will be set by the persistentvolume controller if it exists. +If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be +set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource +exists. +More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ +(Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default).
+
false
volumeModestring + volumeMode defines what type of volume is required by the claim. +Value of Filesystem is implied when not included in claim spec.
+
false
volumeNamestring + volumeName is the binding reference to the PersistentVolume backing this claim.
false
-### Instrumentation.spec.nginx.env[index].valueFrom.configMapKeyRef -[↩ Parent](#instrumentationspecnginxenvindexvaluefrom) +### Instrumentation.spec.nodejs.volumeClaimTemplate.spec.dataSource +[↩ Parent](#instrumentationspecnodejsvolumeclaimtemplatespec) -Selects a key of a ConfigMap. +dataSource field can be used to specify either: +* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) +* An existing PVC (PersistentVolumeClaim) +If the provisioner or an external controller can support the specified data source, +it will create a new volume based on the contents of the specified data source. +When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, +and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. +If the namespace is specified, then dataSourceRef will not be copied to dataSource. @@ -2951,43 +6235,53 @@ Selects a key of a ConfigMap. - + - + - - + +
keykind string - The key to select.
+ Kind is the type of resource being referenced
true
name string - Name of the referent. -This field is effectively required, but due to backwards compatibility is -allowed to be empty. Instances of this type with an empty value here are -almost certainly wrong. -More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
-
- Default:
+ Name is the name of resource being referenced
falsetrue
optionalbooleanapiGroupstring - Specify whether the ConfigMap or its key must be defined
+ APIGroup is the group for the resource being referenced. +If APIGroup is not specified, the specified Kind must be in the core API group. +For any other third-party types, APIGroup is required.
false
-### Instrumentation.spec.nginx.env[index].valueFrom.fieldRef -[↩ Parent](#instrumentationspecnginxenvindexvaluefrom) +### Instrumentation.spec.nodejs.volumeClaimTemplate.spec.dataSourceRef +[↩ Parent](#instrumentationspecnodejsvolumeclaimtemplatespec) -Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, -spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. +dataSourceRef specifies the object from which to populate the volume with data, if a non-empty +volume is desired. This may be any object from a non-empty API group (non +core object) or a PersistentVolumeClaim object. +When this field is specified, volume binding will only succeed if the type of +the specified object matches some installed volume populator or dynamic +provisioner. +This field will replace the functionality of the dataSource field and as such +if both fields are non-empty, they must have the same value. For backwards +compatibility, when namespace isn't specified in dataSourceRef, +both fields (dataSource and dataSourceRef) will be set to the same +value automatically if one of them is empty and the other is non-empty. +When namespace is specified in dataSourceRef, +dataSource isn't set to the same value and must be empty. +There are three important differences between dataSource and dataSourceRef: +* While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects. @@ -2999,30 +6293,51 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI - + - + + + + + + + + + + +
fieldPathkind string - Path of the field to select in the specified API version.
+ Kind is the type of resource being referenced
true
apiVersionname string - Version of the schema the FieldPath is written in terms of, defaults to "v1".
+ Name is the name of resource being referenced
+
true
apiGroupstring + APIGroup is the group for the resource being referenced. +If APIGroup is not specified, the specified Kind must be in the core API group. +For any other third-party types, APIGroup is required.
+
false
namespacestring + Namespace is the namespace of resource being referenced +Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. +(Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
false
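The dataSourceRef semantics described above can be sketched as a fragment of the volumeClaimTemplate spec, assuming a pre-existing VolumeSnapshot (all names here are hypothetical):

```yaml
# Hypothetical: pre-populate the auto-instrumentation volume from an existing
# VolumeSnapshot via dataSourceRef. VolumeSnapshot is a non-core type, so
# apiGroup is required; for a PVC source, apiGroup would be omitted.
spec:
  nodejs:
    volumeClaimTemplate:
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
        dataSourceRef:
          apiGroup: snapshot.storage.k8s.io
          kind: VolumeSnapshot
          name: instr-snapshot    # hypothetical snapshot name
```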
-### Instrumentation.spec.nginx.env[index].valueFrom.resourceFieldRef -[↩ Parent](#instrumentationspecnginxenvindexvaluefrom) +### Instrumentation.spec.nodejs.volumeClaimTemplate.spec.resources +[↩ Parent](#instrumentationspecnodejsvolumeclaimtemplatespec) -Selects a resource of the container: only resources limits and requests -(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. +resources represents the minimum resources the volume should have. +If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements +that are lower than previous value but must still be higher than capacity recorded in the +status field of the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources @@ -3034,36 +6349,33 @@ Selects a resource of the container: only resources limits and requests - - - - - - - + + - - + +
resourcestring - Required: resource to select
-
true
containerNamestringlimitsmap[string]int or string - Container name: required for volumes, optional for env vars
+ Limits describes the maximum amount of compute resources allowed. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
false
divisorint or stringrequestsmap[string]int or string - Specifies the output format of the exposed resources, defaults to "1"
+ Requests describes the minimum amount of compute resources required. +If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, +otherwise to an implementation-defined value. Requests cannot exceed Limits. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
false
-### Instrumentation.spec.nginx.env[index].valueFrom.secretKeyRef -[↩ Parent](#instrumentationspecnginxenvindexvaluefrom) +### Instrumentation.spec.nodejs.volumeClaimTemplate.spec.selector +[↩ Parent](#instrumentationspecnodejsvolumeclaimtemplatespec) -Selects a key of a secret in the pod's namespace +selector is a label query over volumes to consider for binding. @@ -3075,42 +6387,32 @@ Selects a key of a secret in the pod's namespace - - - - - - - + + - - + +
keystring - The key of the secret to select from. Must be a valid secret key.
-
true
namestringmatchExpressions[]object - Name of the referent. -This field is effectively required, but due to backwards compatibility is -allowed to be empty. Instances of this type with an empty value here are -almost certainly wrong. -More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
-
- Default:
+ matchExpressions is a list of label selector requirements. The requirements are ANDed.
false
optionalbooleanmatchLabelsmap[string]string - Specify whether the Secret or its key must be defined
+ matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels +map is equivalent to an element of matchExpressions, whose key field is "key", the +operator is "In", and the values array contains only "value". The requirements are ANDed.
false
-### Instrumentation.spec.nginx.resourceRequirements -[↩ Parent](#instrumentationspecnginx) +### Instrumentation.spec.nodejs.volumeClaimTemplate.spec.selector.matchExpressions[index] +[↩ Parent](#instrumentationspecnodejsvolumeclaimtemplatespecselector) -Resources describes the compute resource requirements. +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. @@ -3122,46 +6424,42 @@ Resources describes the compute resource requirements. - - + + - + - - + + - + - - + +
claims[]objectkeystring - Claims lists the names of resources, defined in spec.resourceClaims, -that are used by this container. - -This is an alpha field and requires enabling the -DynamicResourceAllocation feature gate. - -This field is immutable. It can only be set for containers.
+ key is the label key that the selector applies to.
falsetrue
limitsmap[string]int or stringoperatorstring - Limits describes the maximum amount of compute resources allowed. -More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+ operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist.
falsetrue
requestsmap[string]int or stringvalues[]string - Requests describes the minimum amount of compute resources required. -If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, -otherwise to an implementation-defined value. Requests cannot exceed Limits. -More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+ values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch.
false
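As a sketch of how the selector fields compose (the label keys and values below are hypothetical): matchLabels entries and matchExpressions requirements are all ANDed, so a candidate volume must satisfy every one of them.

```yaml
# Hypothetical: bind the claim only to pre-provisioned volumes whose labels
# satisfy BOTH the matchLabels entry and the matchExpressions requirement.
spec:
  nodejs:
    volumeClaimTemplate:
      spec:
        selector:
          matchLabels:
            tier: instrumentation
          matchExpressions:
            - key: environment
              operator: In
              values: ["dev", "staging"]
```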
-### Instrumentation.spec.nginx.resourceRequirements.claims[index] -[↩ Parent](#instrumentationspecnginxresourcerequirements) +### Instrumentation.spec.nodejs.volumeClaimTemplate.metadata +[↩ Parent](#instrumentationspecnodejsvolumeclaimtemplate) -ResourceClaim references one entry in PodSpec.ResourceClaims. +May contain labels and annotations that will be copied into the PVC +when creating it. No other fields are allowed and will be rejected during +validation. @@ -3173,33 +6471,50 @@ ResourceClaim references one entry in PodSpec.ResourceClaims. + + + + + + + + + + + + + + + - + - +
annotationsmap[string]string +
+
false
finalizers[]string +
+
false
labelsmap[string]string +
+
false
name string - Name must match the name of one entry in pod.spec.resourceClaims of -the Pod where this field is used. It makes that resource available -inside a container.
+
truefalse
requestnamespace string - Request is the name chosen for a request in the referenced claim. -If empty, everything from the claim is made available, otherwise -only the result of this request.
+
false
-### Instrumentation.spec.nodejs +### Instrumentation.spec.python [↩ Parent](#instrumentationspec) -NodeJS defines configuration for nodejs auto-instrumentation. +Python defines configuration for python auto-instrumentation. @@ -3211,10 +6526,10 @@ NodeJS defines configuration for nodejs auto-instrumentation. - + @@ -3223,16 +6538,24 @@ If the former var had been defined, then the other vars would be ignored.
- + + + + + + @@ -3245,8 +6568,8 @@ The default size is 200Mi.
envenv []object - Env defines nodejs specific env vars. There are four layers for env vars' definitions and + Env defines python specific env vars. There are four layers for env vars' definitions and the precedence order is: `original container env vars` > `language specific env vars` > `common env vars` > `instrument spec configs' vars`. If the former var had been defined, then the other vars would be ignored.
image string - Image is a container image with NodeJS SDK and auto-instrumentation.
+ Image is a container image with Python SDK and auto-instrumentation.
false
resourceRequirementsresourceRequirements object Resources describes the compute resource requirements.
false
volumeClaimTemplateobject + VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. +If omitted, an emptyDir is used with size limit VolumeSizeLimit
+
false
volumeLimitSize int or string
-### Instrumentation.spec.nodejs.env[index] -[↩ Parent](#instrumentationspecnodejs) +### Instrumentation.spec.python.env[index] +[↩ Parent](#instrumentationspecpython) @@ -3284,7 +6607,7 @@ Defaults to "".
false - valueFrom + valueFrom object Source for the environment variable's value. Cannot be used if value is not empty.
@@ -3294,8 +6617,8 @@ Defaults to "".
-### Instrumentation.spec.nodejs.env[index].valueFrom -[↩ Parent](#instrumentationspecnodejsenvindex) +### Instrumentation.spec.python.env[index].valueFrom +[↩ Parent](#instrumentationspecpythonenvindex) @@ -3311,14 +6634,14 @@ Source for the environment variable's value. Cannot be used if value is not empt - configMapKeyRef + configMapKeyRef object Selects a key of a ConfigMap.
false - fieldRef + fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, @@ -3326,7 +6649,7 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI false - resourceFieldRef + resourceFieldRef object Selects a resource of the container: only resources limits and requests @@ -3334,7 +6657,7 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI false - secretKeyRef + secretKeyRef object Selects a key of a secret in the pod's namespace
@@ -3344,8 +6667,8 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI -### Instrumentation.spec.nodejs.env[index].valueFrom.configMapKeyRef -[↩ Parent](#instrumentationspecnodejsenvindexvaluefrom) +### Instrumentation.spec.python.env[index].valueFrom.configMapKeyRef +[↩ Parent](#instrumentationspecpythonenvindexvaluefrom) @@ -3391,8 +6714,8 @@ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/nam -### Instrumentation.spec.nodejs.env[index].valueFrom.fieldRef -[↩ Parent](#instrumentationspecnodejsenvindexvaluefrom) +### Instrumentation.spec.python.env[index].valueFrom.fieldRef +[↩ Parent](#instrumentationspecpythonenvindexvaluefrom) @@ -3426,8 +6749,8 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI -### Instrumentation.spec.nodejs.env[index].valueFrom.resourceFieldRef -[↩ Parent](#instrumentationspecnodejsenvindexvaluefrom) +### Instrumentation.spec.python.env[index].valueFrom.resourceFieldRef +[↩ Parent](#instrumentationspecpythonenvindexvaluefrom) @@ -3468,8 +6791,8 @@ Selects a resource of the container: only resources limits and requests -### Instrumentation.spec.nodejs.env[index].valueFrom.secretKeyRef -[↩ Parent](#instrumentationspecnodejsenvindexvaluefrom) +### Instrumentation.spec.python.env[index].valueFrom.secretKeyRef +[↩ Parent](#instrumentationspecpythonenvindexvaluefrom) @@ -3515,8 +6838,8 @@ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/nam -### Instrumentation.spec.nodejs.resourceRequirements -[↩ Parent](#instrumentationspecnodejs) +### Instrumentation.spec.python.resourceRequirements +[↩ Parent](#instrumentationspecpython) @@ -3532,7 +6855,7 @@ Resources describes the compute resource requirements. 
- claims + claims []object Claims lists the names of resources, defined in spec.resourceClaims, @@ -3566,8 +6889,8 @@ More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-co -### Instrumentation.spec.nodejs.resourceRequirements.claims[index] -[↩ Parent](#instrumentationspecnodejsresourcerequirements) +### Instrumentation.spec.python.resourceRequirements.claims[index] +[↩ Parent](#instrumentationspecpythonresourcerequirements) @@ -3604,12 +6927,13 @@ only the result of this request.
-### Instrumentation.spec.python -[↩ Parent](#instrumentationspec) +### Instrumentation.spec.python.volumeClaimTemplate +[↩ Parent](#instrumentationspecpython) -Python defines configuration for python auto-instrumentation. +VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. +If omitted, an emptyDir is used with size limit VolumeSizeLimit @@ -3621,46 +6945,37 @@ Python defines configuration for python auto-instrumentation.
env[]object - Env defines python specific env vars. There are four layers for env vars' definitions and -the precedence order is: `original container env vars` > `language specific env vars` > `common env vars` > `instrument spec configs' vars`. -If the former var had been defined, then the other vars would be ignored.
-
false
imagestring - Image is a container image with Python SDK and auto-instrumentation.
-
false
resourceRequirementsspec object - Resources describes the compute resource requirements.
+ The specification for the PersistentVolumeClaim. The entire content is +copied unchanged into the PVC that gets created from this +template. The same fields as in a PersistentVolumeClaim +are also valid here.
falsetrue
volumeLimitSizeint or stringmetadataobject - VolumeSizeLimit defines size limit for volume used for auto-instrumentation. -The default size is 200Mi.
+ May contain labels and annotations that will be copied into the PVC +when creating it. No other fields are allowed and will be rejected during +validation.
false
-### Instrumentation.spec.python.env[index] -[↩ Parent](#instrumentationspecpython) +### Instrumentation.spec.python.volumeClaimTemplate.spec +[↩ Parent](#instrumentationspecpythonvolumeclaimtemplate) -EnvVar represents an environment variable present in a Container. +The specification for the PersistentVolumeClaim. The entire content is +copied unchanged into the PVC that gets created from this +template. The same fields as in a PersistentVolumeClaim +are also valid here. @@ -3672,94 +6987,125 @@ EnvVar represents an environment variable present in a Container. - - + + - + - - + + - + - -
namestringaccessModes[]string - Name of the environment variable. Must be a C_IDENTIFIER.
+ accessModes contains the desired access modes the volume should have. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
truefalse
valuestringdataSourceobject - Variable references $(VAR_NAME) are expanded -using the previously defined environment variables in the container and -any service environment variables. If a variable cannot be resolved, -the reference in the input string will be unchanged. Double $$ are reduced -to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. -"$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". -Escaped references will never be expanded, regardless of whether the variable -exists or not. -Defaults to "".
+ dataSource field can be used to specify either: +* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) +* An existing PVC (PersistentVolumeClaim) +If the provisioner or an external controller can support the specified data source, +it will create a new volume based on the contents of the specified data source. +When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, +and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. +If the namespace is specified, then dataSourceRef will not be copied to dataSource.
false
valueFromdataSourceRef object - Source for the environment variable's value. Cannot be used if value is not empty.
+ dataSourceRef specifies the object from which to populate the volume with data, if a non-empty +volume is desired. This may be any object from a non-empty API group (non +core object) or a PersistentVolumeClaim object. +When this field is specified, volume binding will only succeed if the type of +the specified object matches some installed volume populator or dynamic +provisioner. +This field will replace the functionality of the dataSource field and as such +if both fields are non-empty, they must have the same value. For backwards +compatibility, when namespace isn't specified in dataSourceRef, +both fields (dataSource and dataSourceRef) will be set to the same +value automatically if one of them is empty and the other is non-empty. +When namespace is specified in dataSourceRef, +dataSource isn't set to the same value and must be empty. +There are three important differences between dataSource and dataSourceRef: +* While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects.
false
- - -### Instrumentation.spec.python.env[index].valueFrom -[↩ Parent](#instrumentationspecpythonenvindex) - - - -Source for the environment variable's value. Cannot be used if value is not empty. - - - - - - - - - - - - + + - + - - + + - - + + + + + + + + + + + +
NameTypeDescriptionRequired
configMapKeyRef
resources object - Selects a key of a ConfigMap.
+ resources represents the minimum resources the volume should have. +If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements +that are lower than previous value but must still be higher than capacity recorded in the +status field of the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
false
fieldRefselector object - Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, -spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
+ selector is a label query over volumes to consider for binding.
false
resourceFieldRefobjectstorageClassNamestring - Selects a resource of the container: only resources limits and requests -(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
+ storageClassName is the name of the StorageClass required by the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
false
secretKeyRefobjectvolumeAttributesClassNamestring - Selects a key of a secret in the pod's namespace
+ volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. +If specified, the CSI driver will create or update the volume with the attributes defined +in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, +it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass +will be applied to the claim but it's not allowed to reset this field to empty string once it is set. +If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass +will be set by the persistentvolume controller if it exists. +If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be +set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource +exists. +More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ +(Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default).
+
false
volumeModestring + volumeMode defines what type of volume is required by the claim. +Value of Filesystem is implied when not included in claim spec.
+
false
volumeNamestring + volumeName is the binding reference to the PersistentVolume backing this claim.
false
-### Instrumentation.spec.python.env[index].valueFrom.configMapKeyRef -[↩ Parent](#instrumentationspecpythonenvindexvaluefrom) +### Instrumentation.spec.python.volumeClaimTemplate.spec.dataSource +[↩ Parent](#instrumentationspecpythonvolumeclaimtemplatespec) -Selects a key of a ConfigMap. +dataSource field can be used to specify either: +* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) +* An existing PVC (PersistentVolumeClaim) +If the provisioner or an external controller can support the specified data source, +it will create a new volume based on the contents of the specified data source. +When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, +and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. +If the namespace is specified, then dataSourceRef will not be copied to dataSource. @@ -3771,43 +7117,53 @@ Selects a key of a ConfigMap. - + - + - - + +
keykind string - The key to select.
+ Kind is the type of resource being referenced
true
name string - Name of the referent. -This field is effectively required, but due to backwards compatibility is -allowed to be empty. Instances of this type with an empty value here are -almost certainly wrong. -More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
-
- Default:
+ Name is the name of resource being referenced
falsetrue
optionalbooleanapiGroupstring - Specify whether the ConfigMap or its key must be defined
+ APIGroup is the group for the resource being referenced. +If APIGroup is not specified, the specified Kind must be in the core API group. +For any other third-party types, APIGroup is required.
false
-### Instrumentation.spec.python.env[index].valueFrom.fieldRef -[↩ Parent](#instrumentationspecpythonenvindexvaluefrom) +### Instrumentation.spec.python.volumeClaimTemplate.spec.dataSourceRef +[↩ Parent](#instrumentationspecpythonvolumeclaimtemplatespec) -Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, -spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. +dataSourceRef specifies the object from which to populate the volume with data, if a non-empty +volume is desired. This may be any object from a non-empty API group (non +core object) or a PersistentVolumeClaim object. +When this field is specified, volume binding will only succeed if the type of +the specified object matches some installed volume populator or dynamic +provisioner. +This field will replace the functionality of the dataSource field and as such +if both fields are non-empty, they must have the same value. For backwards +compatibility, when namespace isn't specified in dataSourceRef, +both fields (dataSource and dataSourceRef) will be set to the same +value automatically if one of them is empty and the other is non-empty. +When namespace is specified in dataSourceRef, +dataSource isn't set to the same value and must be empty. +There are three important differences between dataSource and dataSourceRef: +* While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects. @@ -3819,30 +7175,51 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI - + - + + + + + + + + + + +
fieldPathkind string - Path of the field to select in the specified API version.
+ Kind is the type of resource being referenced
true
apiVersionname string - Version of the schema the FieldPath is written in terms of, defaults to "v1".
+ Name is the name of resource being referenced
+
true
apiGroupstring + APIGroup is the group for the resource being referenced. +If APIGroup is not specified, the specified Kind must be in the core API group. +For any other third-party types, APIGroup is required.
+
false
namespacestring + Namespace is the namespace of resource being referenced +Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. +(Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
false
-### Instrumentation.spec.python.env[index].valueFrom.resourceFieldRef -[↩ Parent](#instrumentationspecpythonenvindexvaluefrom) +### Instrumentation.spec.python.volumeClaimTemplate.spec.resources +[↩ Parent](#instrumentationspecpythonvolumeclaimtemplatespec) -Selects a resource of the container: only resources limits and requests -(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. +resources represents the minimum resources the volume should have. +If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements +that are lower than previous value but must still be higher than capacity recorded in the +status field of the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources @@ -3854,36 +7231,33 @@ Selects a resource of the container: only resources limits and requests - - - - - - - + + - - + +
resourcestring - Required: resource to select
-
true
containerNamestringlimitsmap[string]int or string - Container name: required for volumes, optional for env vars
+ Limits describes the maximum amount of compute resources allowed. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
false
divisorint or stringrequestsmap[string]int or string - Specifies the output format of the exposed resources, defaults to "1"
+ Requests describes the minimum amount of compute resources required. +If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, +otherwise to an implementation-defined value. Requests cannot exceed Limits. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
false
-### Instrumentation.spec.python.env[index].valueFrom.secretKeyRef -[↩ Parent](#instrumentationspecpythonenvindexvaluefrom) +### Instrumentation.spec.python.volumeClaimTemplate.spec.selector +[↩ Parent](#instrumentationspecpythonvolumeclaimtemplatespec) -Selects a key of a secret in the pod's namespace +selector is a label query over volumes to consider for binding. @@ -3895,42 +7269,32 @@ Selects a key of a secret in the pod's namespace - - - - - - - + + - - + +
keystring - The key of the secret to select from. Must be a valid secret key.
-
true
namestringmatchExpressions[]object - Name of the referent. -This field is effectively required, but due to backwards compatibility is -allowed to be empty. Instances of this type with an empty value here are -almost certainly wrong. -More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
-
- Default:
+ matchExpressions is a list of label selector requirements. The requirements are ANDed.
false
optionalbooleanmatchLabelsmap[string]string - Specify whether the Secret or its key must be defined
+ matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels +map is equivalent to an element of matchExpressions, whose key field is "key", the +operator is "In", and the values array contains only "value". The requirements are ANDed.
false
-### Instrumentation.spec.python.resourceRequirements -[↩ Parent](#instrumentationspecpython) +### Instrumentation.spec.python.volumeClaimTemplate.spec.selector.matchExpressions[index] +[↩ Parent](#instrumentationspecpythonvolumeclaimtemplatespecselector) -Resources describes the compute resource requirements. +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. @@ -3942,46 +7306,42 @@ Resources describes the compute resource requirements. - - + + - + - - + + - + - - + +
claims[]objectkeystring - Claims lists the names of resources, defined in spec.resourceClaims, -that are used by this container. - -This is an alpha field and requires enabling the -DynamicResourceAllocation feature gate. - -This field is immutable. It can only be set for containers.
+ key is the label key that the selector applies to.
falsetrue
limitsmap[string]int or stringoperatorstring - Limits describes the maximum amount of compute resources allowed. -More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+ operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist.
falsetrue
requestsmap[string]int or stringvalues[]string - Requests describes the minimum amount of compute resources required. -If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, -otherwise to an implementation-defined value. Requests cannot exceed Limits. -More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+ values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch.
false
-### Instrumentation.spec.python.resourceRequirements.claims[index] -[↩ Parent](#instrumentationspecpythonresourcerequirements) +### Instrumentation.spec.python.volumeClaimTemplate.metadata +[↩ Parent](#instrumentationspecpythonvolumeclaimtemplate) -ResourceClaim references one entry in PodSpec.ResourceClaims. +May contain labels and annotations that will be copied into the PVC +when creating it. No other fields are allowed and will be rejected during +validation. @@ -3993,21 +7353,38 @@ ResourceClaim references one entry in PodSpec.ResourceClaims. + + + + + + + + + + + + + + + - + - + @@ -31103,6 +34480,15 @@ This only works with the following OpenTelemetryCollector mode's: daemonset, sta ObservabilitySpec defines how telemetry data gets handled.
+ + + + + @@ -40863,6 +44249,49 @@ The operator.observability.prometheus feature gate must be enabled to use this f
annotationsmap[string]string +
+
false
finalizers[]string +
+
false
labelsmap[string]string +
+
false
name string - Name must match the name of one entry in pod.spec.resourceClaims of -the Pod where this field is used. It makes that resource available -inside a container.
+
truefalse
requestnamespace string - Request is the name chosen for a request in the referenced claim. -If empty, everything from the claim is made available, otherwise -only the result of this request.
+
false
false
persistentVolumeClaimRetentionPolicyobject + PersistentVolumeClaimRetentionPolicy describes the lifecycle of persistent volume claims +created from volumeClaimTemplates. +This only works with the following OpenTelemetryCollector modes: statefulset.
+
false
podAnnotations map[string]string
+### OpenTelemetryCollector.spec.persistentVolumeClaimRetentionPolicy +[↩ Parent](#opentelemetrycollectorspec-1) + + + +PersistentVolumeClaimRetentionPolicy describes the lifecycle of persistent volume claims +created from volumeClaimTemplates. +This only works with the following OpenTelemetryCollector modes: statefulset. + + + + + + + + + + + + + + + + + + + + + +
NameTypeDescriptionRequired
whenDeletedstring + WhenDeleted specifies what happens to PVCs created from StatefulSet +VolumeClaimTemplates when the StatefulSet is deleted. The default policy +of `Retain` causes PVCs to not be affected by StatefulSet deletion. The +`Delete` policy causes those PVCs to be deleted.
+
false
whenScaledstring + WhenScaled specifies what happens to PVCs created from StatefulSet +VolumeClaimTemplates when the StatefulSet is scaled down. The default +policy of `Retain` causes PVCs to not be affected by a scaledown. The +`Delete` policy causes the associated PVCs for any excess pods above +the replica count to be deleted.
+
false
+
+

### OpenTelemetryCollector.spec.podDisruptionBudget
[↩ Parent](#opentelemetrycollectorspec-1)


diff --git a/docs/compatibility.md b/docs/compatibility.md
new file mode 100644
index 0000000000..b1b68893e8
--- /dev/null
+++ b/docs/compatibility.md
@@ -0,0 +1,76 @@
+# Compatibility
+
+This document details the compatibility guarantees the OpenTelemetry Operator offers for its dependencies and platforms.
+
+## Go
+
+When productised as a Go library or custom distribution, the OpenTelemetry Operator project attempts to follow the supported Go versions as [defined by the Go team](https://go.dev/doc/devel/release#policy).
+
+Similar to the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector?tab=readme-ov-file#compatibility), removing support for an unsupported Go version is not considered a breaking change.
+
+Support for Go versions in the OpenTelemetry Operator is updated as follows:
+
+- The first release after the release of a new Go minor version N will add build and test steps for the new Go minor version.
+- The first release after the release of a new Go minor version N will remove support for Go version N-2.
+
+Official OpenTelemetry Operator binaries may be built with any supported Go version.
+
+## Kubernetes
+
+As a rule, the operator tries to be compatible with as wide a range of Kubernetes versions as possible.
+
+We will *always* support all the versions maintained by the upstream Kubernetes project, as detailed on its [releases page][kubernetes_releases].
+
+We will make every effort to support all Kubernetes versions maintained by popular distributions and hosted platforms. For example, you can realistically expect us to always support all versions offered by [OpenShift][openshift_support] and [AWS EKS][aws_support].
+
+Whenever we do remove support for a Kubernetes version, we will give at least one month's notice beforehand.
+
+The [compatibility matrix](#compatibility-matrix) below precisely shows the supported Kubernetes versions for each operator release.
+
+## OpenTelemetry Operator vs. OpenTelemetry Collector
+
+The OpenTelemetry Operator follows the same versioning as the operand (OpenTelemetry Collector) up to the minor part of the version. For example, the OpenTelemetry Operator v0.18.1 tracks OpenTelemetry Collector 0.18.0. The patch part of the version indicates the patch level of the operator itself, not that of OpenTelemetry Collector. Whenever a new patch version is released for OpenTelemetry Collector, we'll release a new patch version of the operator.
+
+By default, the OpenTelemetry Operator ensures consistent versioning between itself and the managed `OpenTelemetryCollector` resources. That is, if the OpenTelemetry Operator is based on version `0.40.0`, it will create resources with an underlying OpenTelemetry Collector at version `0.40.0`.
+
+When a custom `Spec.Image` is used with an `OpenTelemetryCollector` resource, the OpenTelemetry Operator will not manage this versioning and upgrading. In this scenario, it is best practice for the OpenTelemetry Operator version to match the version of the underlying OpenTelemetry Collector. Given an `OpenTelemetryCollector` resource with `Spec.Image` configured to a custom image based on OpenTelemetry Collector at version `0.40.0`, it is recommended that the OpenTelemetry Operator be kept at version `0.40.0`.
+
+## Compatibility matrix
+
+We use `cert-manager` for some features of this operator, and the third column shows the versions of `cert-manager` that are known to work with each operator version.
+
+The Target Allocator supports prometheus-operator CRDs like ServiceMonitor, and it does so by using packages imported from prometheus-operator itself. The table shows which prometheus-operator version is shipped with a given operator version.
+Generally speaking, these are backwards compatible, but specific features require the appropriate package versions. + +The OpenTelemetry Operator _might_ work on versions outside of the given range, but when opening new issues, please make sure to test your scenario on a supported version. + +| OpenTelemetry Operator | Kubernetes | Cert-Manager | Prometheus-Operator | +|------------------------|----------------| ------------ |---------------------| +| v0.113.0 | v1.23 to v1.31 | v1 | v0.76.0 | +| v0.112.0 | v1.23 to v1.31 | v1 | v0.76.0 | +| v0.111.0 | v1.23 to v1.31 | v1 | v0.76.0 | +| v0.110.0 | v1.23 to v1.31 | v1 | v0.76.0 | +| v0.109.0 | v1.23 to v1.31 | v1 | v0.76.0 | +| v0.108.0 | v1.23 to v1.31 | v1 | v0.76.0 | +| v0.107.0 | v1.23 to v1.30 | v1 | v0.75.0 | +| v0.106.0 | v1.23 to v1.30 | v1 | v0.75.0 | +| v0.105.0 | v1.23 to v1.30 | v1 | v0.74.0 | +| v0.104.0 | v1.23 to v1.30 | v1 | v0.74.0 | +| v0.103.0 | v1.23 to v1.30 | v1 | v0.74.0 | +| v0.102.0 | v1.23 to v1.30 | v1 | v0.71.2 | +| v0.101.0 | v1.23 to v1.30 | v1 | v0.71.2 | +| v0.100.0 | v1.23 to v1.29 | v1 | v0.71.2 | +| v0.99.0 | v1.23 to v1.29 | v1 | v0.71.2 | +| v0.98.0 | v1.23 to v1.29 | v1 | v0.71.2 | +| v0.97.0 | v1.23 to v1.29 | v1 | v0.71.2 | +| v0.96.0 | v1.23 to v1.29 | v1 | v0.71.2 | +| v0.95.0 | v1.23 to v1.29 | v1 | v0.71.2 | +| v0.94.0 | v1.23 to v1.29 | v1 | v0.71.0 | +| v0.93.0 | v1.23 to v1.29 | v1 | v0.71.0 | +| v0.92.0 | v1.23 to v1.29 | v1 | v0.71.0 | +| v0.91.0 | v1.23 to v1.29 | v1 | v0.70.0 | +| v0.90.0 | v1.23 to v1.28 | v1 | v0.69.1 | + +[kubernetes_releases]: https://kubernetes.io/releases/ +[openshift_support]: https://access.redhat.com/support/policy/updates/openshift +[aws_support]: https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html \ No newline at end of file diff --git a/go.mod b/go.mod index 3947fed58d..7b8df7f64d 100644 --- a/go.mod +++ b/go.mod @@ -6,7 +6,7 @@ retract v1.51.0 require ( dario.cat/mergo v1.0.1 - github.com/Masterminds/semver/v3 
v3.3.0 + github.com/Masterminds/semver/v3 v3.3.1 github.com/blang/semver/v4 v4.0.0 github.com/buraksezer/consistent v0.10.0 github.com/cespare/xxhash/v2 v2.3.0 @@ -22,43 +22,44 @@ require ( github.com/openshift/api v0.0.0-20240124164020-e2ce40831f2e github.com/operator-framework/api v0.27.0 github.com/operator-framework/operator-lib v0.15.0 - github.com/prometheus-operator/prometheus-operator v0.76.0 + github.com/prometheus-operator/prometheus-operator v0.76.2 github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.76.2 github.com/prometheus-operator/prometheus-operator/pkg/client v0.76.2 - github.com/prometheus/client_golang v1.20.4 - github.com/prometheus/common v0.60.0 - github.com/prometheus/prometheus v0.54.1 + github.com/prometheus/client_golang v1.20.5 + github.com/prometheus/common v0.60.1 + github.com/prometheus/prometheus v0.55.1 github.com/shirou/gopsutil v3.21.11+incompatible github.com/spf13/pflag v1.0.5 - github.com/stretchr/testify v1.9.0 - go.opentelemetry.io/collector/featuregate v1.17.0 - go.opentelemetry.io/otel v1.30.0 - go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.30.0 - go.opentelemetry.io/otel/exporters/prometheus v0.52.0 - go.opentelemetry.io/otel/metric v1.30.0 - go.opentelemetry.io/otel/sdk v1.30.0 - go.opentelemetry.io/otel/sdk/metric v1.30.0 + github.com/stretchr/testify v1.10.0 + go.opentelemetry.io/collector/featuregate v1.20.0 + go.opentelemetry.io/otel v1.32.0 + go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.32.0 + go.opentelemetry.io/otel/exporters/prometheus v0.54.0 + go.opentelemetry.io/otel/metric v1.32.0 + go.opentelemetry.io/otel/sdk v1.32.0 + go.opentelemetry.io/otel/sdk/metric v1.32.0 go.uber.org/multierr v1.11.0 go.uber.org/zap v1.27.0 gopkg.in/yaml.v2 v2.4.0 gopkg.in/yaml.v3 v3.0.1 - k8s.io/api v0.31.1 - k8s.io/apiextensions-apiserver v0.31.1 - k8s.io/apimachinery v0.31.1 - k8s.io/client-go v0.31.1 - k8s.io/component-base v0.31.1 + k8s.io/api v0.31.3 + 
k8s.io/apiextensions-apiserver v0.31.3 + k8s.io/apimachinery v0.31.3 + k8s.io/client-go v0.31.3 + k8s.io/component-base v0.31.3 k8s.io/klog/v2 v2.130.1 - k8s.io/kubectl v0.31.1 - k8s.io/utils v0.0.0-20240711033017-18e509b52bc8 - sigs.k8s.io/controller-runtime v0.19.0 + k8s.io/kubectl v0.31.3 + k8s.io/utils v0.0.0-20240921022957-49e7df575cb6 + sigs.k8s.io/controller-runtime v0.19.2 + sigs.k8s.io/gateway-api v1.1.0 // indirect sigs.k8s.io/yaml v1.4.0 ) require ( - cloud.google.com/go/auth v0.7.0 // indirect - cloud.google.com/go/auth/oauth2adapt v0.2.2 // indirect - cloud.google.com/go/compute/metadata v0.4.0 // indirect - github.com/Azure/azure-sdk-for-go/sdk/azcore v1.13.0 // indirect + cloud.google.com/go/auth v0.9.4 // indirect + cloud.google.com/go/auth/oauth2adapt v0.2.4 // indirect + cloud.google.com/go/compute/metadata v0.5.1 // indirect + github.com/Azure/azure-sdk-for-go/sdk/azcore v1.14.0 // indirect github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.7.0 // indirect github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 // indirect github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/compute/armcompute/v5 v5.7.0 // indirect @@ -69,26 +70,27 @@ require ( github.com/alecthomas/units v0.0.0-20240626203959-61d1e3462e30 // indirect github.com/armon/go-metrics v0.4.1 // indirect github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect - github.com/aws/aws-sdk-go v1.54.19 // indirect + github.com/aws/aws-sdk-go v1.55.5 // indirect github.com/beorn7/perks v1.0.1 // indirect github.com/bytedance/sonic v1.11.6 // indirect github.com/bytedance/sonic/loader v0.1.1 // indirect github.com/cenkalti/backoff/v4 v4.3.0 // indirect + github.com/cert-manager/cert-manager v1.16.2 github.com/cloudwego/base64x v0.1.4 // indirect github.com/cloudwego/iasm v0.2.0 // indirect - github.com/cncf/xds/go v0.0.0-20240423153145-555b57ec207b // indirect + github.com/cncf/xds/go v0.0.0-20240723142845-024c85f92f20 // indirect github.com/davecgh/go-spew 
v1.1.2-0.20180830191138-d8f796af33cc // indirect github.com/dennwc/varint v1.0.0 // indirect - github.com/digitalocean/godo v1.118.0 // indirect + github.com/digitalocean/godo v1.125.0 // indirect github.com/distribution/reference v0.6.0 // indirect - github.com/docker/docker v27.1.1+incompatible // indirect + github.com/docker/docker v27.2.0+incompatible // indirect github.com/docker/go-connections v0.4.0 // indirect github.com/docker/go-units v0.5.0 // indirect github.com/edsrzf/mmap-go v1.1.0 // indirect github.com/efficientgo/core v1.0.0-rc.2 // indirect github.com/emicklei/go-restful/v3 v3.12.1 // indirect - github.com/envoyproxy/go-control-plane v0.12.1-0.20240621013728-1eb8caab5155 // indirect - github.com/envoyproxy/protoc-gen-validate v1.0.4 // indirect + github.com/envoyproxy/go-control-plane v0.13.0 // indirect + github.com/envoyproxy/protoc-gen-validate v1.1.0 // indirect github.com/evanphx/json-patch/v5 v5.9.0 // indirect github.com/facette/natsort v0.0.0-20181210072756-2cd4dd1e2dcb // indirect github.com/fatih/color v1.16.0 // indirect @@ -125,14 +127,14 @@ require ( github.com/google/go-cmp v0.6.0 // indirect github.com/google/go-querystring v1.1.0 // indirect github.com/google/gofuzz v1.2.0 // indirect - github.com/google/s2a-go v0.1.7 // indirect - github.com/googleapis/enterprise-certificate-proxy v0.3.2 // indirect - github.com/googleapis/gax-go/v2 v2.12.5 // indirect - github.com/gophercloud/gophercloud v1.13.0 // indirect + github.com/google/s2a-go v0.1.8 // indirect + github.com/googleapis/enterprise-certificate-proxy v0.3.4 // indirect + github.com/googleapis/gax-go/v2 v2.13.0 // indirect + github.com/gophercloud/gophercloud v1.14.0 // indirect github.com/gorilla/websocket v1.5.1 // indirect github.com/grafana/regexp v0.0.0-20240518133315-a468a5bfb3bc // indirect - github.com/grpc-ecosystem/grpc-gateway/v2 v2.22.0 // indirect - github.com/hashicorp/consul/api v1.29.2 // indirect + github.com/grpc-ecosystem/grpc-gateway/v2 v2.23.0 // indirect 
+ github.com/hashicorp/consul/api v1.29.4 // indirect github.com/hashicorp/cronexpr v1.1.2 // indirect github.com/hashicorp/errwrap v1.1.0 // indirect github.com/hashicorp/go-cleanhttp v0.5.2 // indirect @@ -145,11 +147,11 @@ require ( github.com/hashicorp/golang-lru v0.6.0 // indirect github.com/hashicorp/nomad/api v0.0.0-20240717122358-3d93bd3778f3 // indirect github.com/hashicorp/serf v0.10.1 // indirect - github.com/hetznercloud/hcloud-go/v2 v2.10.2 // indirect + github.com/hetznercloud/hcloud-go/v2 v2.13.1 // indirect github.com/imdario/mergo v0.3.16 // indirect github.com/inconshreveable/mousetrap v1.1.0 // indirect - github.com/ionos-cloud/sdk-go/v6 v6.1.11 // indirect - github.com/jmespath/go-jmespath v0.4.0 // indirect + github.com/ionos-cloud/sdk-go/v6 v6.2.1 // indirect + github.com/jmespath/go-jmespath v0.4.1-0.20220621161143-b0104c826a24 // indirect github.com/josharian/intern v1.0.0 // indirect github.com/jpillora/backoff v1.0.0 // indirect github.com/klauspost/compress v1.17.9 // indirect @@ -157,12 +159,12 @@ require ( github.com/kolo/xmlrpc v0.0.0-20220921171641-a4b6fa1dd06b // indirect github.com/kylelemons/godebug v1.1.0 // indirect github.com/leodido/go-urn v1.4.0 // indirect - github.com/linode/linodego v1.37.0 // indirect + github.com/linode/linodego v1.40.0 // indirect github.com/mailru/easyjson v0.7.7 // indirect github.com/mattn/go-colorable v0.1.13 // indirect github.com/mattn/go-isatty v0.0.20 // indirect github.com/metalmatze/signal v0.0.0-20210307161603-1c9aa721a97a // indirect - github.com/miekg/dns v1.1.61 // indirect + github.com/miekg/dns v1.1.62 // indirect github.com/mitchellh/go-homedir v1.1.0 // indirect github.com/moby/docker-image-spec v1.3.1 // indirect github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect @@ -184,7 +186,7 @@ require ( github.com/prometheus/client_model v0.6.1 // indirect github.com/prometheus/common/sigv4 v0.1.0 // indirect github.com/prometheus/procfs v0.15.1 // indirect - 
github.com/scaleway/scaleway-sdk-go v1.0.0-beta.29 // indirect + github.com/scaleway/scaleway-sdk-go v1.0.0-beta.30 // indirect github.com/sirupsen/logrus v1.9.3 // indirect github.com/spf13/cobra v1.8.1 // indirect github.com/stretchr/objx v0.5.2 // indirect @@ -197,32 +199,32 @@ require ( github.com/yusufpapurcu/wmi v1.2.3 // indirect go.mongodb.org/mongo-driver v1.14.0 // indirect go.opencensus.io v0.24.0 // indirect - go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0 // indirect - go.opentelemetry.io/otel/trace v1.30.0 // indirect + go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.54.0 // indirect + go.opentelemetry.io/otel/trace v1.32.0 // indirect go.opentelemetry.io/proto/otlp v1.3.1 // indirect go.uber.org/atomic v1.11.0 // indirect golang.org/x/arch v0.8.0 // indirect - golang.org/x/crypto v0.27.0 // indirect - golang.org/x/exp v0.0.0-20240808152545-0cdaa3abc0fa // indirect - golang.org/x/mod v0.20.0 // indirect - golang.org/x/net v0.29.0 // indirect + golang.org/x/crypto v0.28.0 // indirect + golang.org/x/exp v0.0.0-20240909161429-701f63a606c0 // indirect + golang.org/x/mod v0.21.0 // indirect + golang.org/x/net v0.30.0 // indirect golang.org/x/oauth2 v0.23.0 // indirect - golang.org/x/sync v0.8.0 // indirect - golang.org/x/sys v0.25.0 // indirect - golang.org/x/term v0.24.0 // indirect - golang.org/x/text v0.18.0 // indirect + golang.org/x/sync v0.9.0 // indirect + golang.org/x/sys v0.27.0 // indirect + golang.org/x/term v0.25.0 // indirect + golang.org/x/text v0.20.0 // indirect golang.org/x/time v0.6.0 // indirect - golang.org/x/tools v0.24.0 // indirect + golang.org/x/tools v0.25.0 // indirect gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect - google.golang.org/api v0.188.0 // indirect - google.golang.org/genproto/googleapis/api v0.0.0-20240903143218-8af14fe29dc1 // indirect - google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1 // indirect - google.golang.org/grpc v1.66.1 // indirect - 
google.golang.org/protobuf v1.34.2 // indirect + google.golang.org/api v0.198.0 // indirect + google.golang.org/genproto/googleapis/api v0.0.0-20241104194629-dd2ea8efbc28 // indirect + google.golang.org/genproto/googleapis/rpc v0.0.0-20241104194629-dd2ea8efbc28 // indirect + google.golang.org/grpc v1.67.1 // indirect + google.golang.org/protobuf v1.35.1 // indirect gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect gopkg.in/inf.v0 v0.9.1 // indirect gopkg.in/ini.v1 v1.67.0 // indirect - k8s.io/kube-openapi v0.0.0-20240808142205-8e686545bdb8 // indirect + k8s.io/kube-openapi v0.0.0-20240903163716-9e1beecbcb38 // indirect sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect ) diff --git a/go.sum b/go.sum index f1069f8cf6..2827d64245 100644 --- a/go.sum +++ b/go.sum @@ -13,18 +13,18 @@ cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKV cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs= cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc= cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY= -cloud.google.com/go/auth v0.7.0 h1:kf/x9B3WTbBUHkC+1VS8wwwli9TzhSt0vSTVBmMR8Ts= -cloud.google.com/go/auth v0.7.0/go.mod h1:D+WqdrpcjmiCgWrXmLLxOVq1GACoE36chW6KXoEvuIw= -cloud.google.com/go/auth/oauth2adapt v0.2.2 h1:+TTV8aXpjeChS9M+aTtN/TjdQnzJvmzKFt//oWu7HX4= -cloud.google.com/go/auth/oauth2adapt v0.2.2/go.mod h1:wcYjgpZI9+Yu7LyYBg4pqSiaRkfEK3GQcpb7C/uyF1Q= +cloud.google.com/go/auth v0.9.4 h1:DxF7imbEbiFu9+zdKC6cKBko1e8XeJnipNqIbWZ+kDI= +cloud.google.com/go/auth v0.9.4/go.mod h1:SHia8n6//Ya940F1rLimhJCjjx7KE17t0ctFEci3HkA= +cloud.google.com/go/auth/oauth2adapt v0.2.4 h1:0GWE/FUsXhf6C+jAkWgYm7X9tK8cuEIfy19DBn6B6bY= +cloud.google.com/go/auth/oauth2adapt v0.2.4/go.mod h1:jC/jOpwFP6JBxhB3P5Rr0a9HLMC/Pe3eaL4NmdvqPtc= cloud.google.com/go/bigquery v1.0.1/go.mod 
h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o= cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE= cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc= cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg= cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc= cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ= -cloud.google.com/go/compute/metadata v0.4.0 h1:vHzJCWaM4g8XIcm8kopr3XmDA4Gy/lblD3EhhSux05c= -cloud.google.com/go/compute/metadata v0.4.0/go.mod h1:SIQh1Kkb4ZJ8zJ874fqVkslA29PRXuleyj6vOzlbK7M= +cloud.google.com/go/compute/metadata v0.5.1 h1:NM6oZeZNlYjiwYje+sYFjEpP0Q0zCan1bmQW/KmIrGs= +cloud.google.com/go/compute/metadata v0.5.1/go.mod h1:C66sj2AluDcIqakBq/M8lw8/ybHgOZqin2obFxa/E5k= cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE= cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk= cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I= @@ -39,8 +39,8 @@ cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9 dario.cat/mergo v1.0.1 h1:Ra4+bf83h2ztPIQYNP99R6m+Y7KfnARDfID+a+vLl4s= dario.cat/mergo v1.0.1/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk= dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU= -github.com/Azure/azure-sdk-for-go/sdk/azcore v1.13.0 h1:GJHeeA2N7xrG3q30L2UXDyuWRzDM900/65j70wcM4Ww= -github.com/Azure/azure-sdk-for-go/sdk/azcore v1.13.0/go.mod h1:l38EPgmsp71HHLq9j7De57JcKOWPyhrsW1Awm1JS6K0= +github.com/Azure/azure-sdk-for-go/sdk/azcore v1.14.0 h1:nyQWyZvwGTvunIMxi1Y9uXkcyr+I7TeNrr/foo4Kpk8= +github.com/Azure/azure-sdk-for-go/sdk/azcore v1.14.0/go.mod h1:l38EPgmsp71HHLq9j7De57JcKOWPyhrsW1Awm1JS6K0= github.com/Azure/azure-sdk-for-go/sdk/azidentity 
v1.7.0 h1:tfLQ34V6F7tVSwoTf/4lH5sE0o6eCJuNDTmH09nDpbc= github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.7.0/go.mod h1:9kIvujWAA58nmPmWB1m23fyWic1kYZMxD9CxaWn4Qpg= github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 h1:ywEEhmNahHBihViHepv3xPBn1663uRv2t2q/ESv9seY= @@ -63,8 +63,8 @@ github.com/Code-Hex/go-generics-cache v1.5.1 h1:6vhZGc5M7Y/YD8cIUcY8kcuQLB4cHR7U github.com/Code-Hex/go-generics-cache v1.5.1/go.mod h1:qxcC9kRVrct9rHeiYpFWSoW1vxyillCVzX13KZG8dl4= github.com/DATA-DOG/go-sqlmock v1.4.1/go.mod h1:f/Ixk793poVmq4qj/V1dPUg2JEAKC73Q5eFN3EC/SaM= github.com/DataDog/datadog-go v3.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ= -github.com/Masterminds/semver/v3 v3.3.0 h1:B8LGeaivUe71a5qox1ICM/JLl0NqZSW5CHyL+hmvYS0= -github.com/Masterminds/semver/v3 v3.3.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM= +github.com/Masterminds/semver/v3 v3.3.1 h1:QtNSWtVZ3nBfk8mAOu/B6v7FMJ+NHTIgUPi7rj+4nv4= +github.com/Masterminds/semver/v3 v3.3.1/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM= github.com/Microsoft/go-winio v0.6.1 h1:9/kr64B9VUZrLm5YYwbGtUJnMgqWVOdUAXu6Migciow= github.com/Microsoft/go-winio v0.6.1/go.mod h1:LRdKpFKfdobln8UmuiYcKPot9D2v6svN5+sAH+4kjUM= github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= @@ -83,8 +83,8 @@ github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgI github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 h1:DklsrG3dyBCFEj5IhUbnKptjxatkF07cF2ak3yi77so= github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2/go.mod h1:WaHUgvxTVq04UNunO+XhnAqY/wQc+bxr74GqbsZ/Jqw= github.com/aws/aws-sdk-go v1.38.35/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro= -github.com/aws/aws-sdk-go v1.54.19 h1:tyWV+07jagrNiCcGRzRhdtVjQs7Vy41NwsuOcl0IbVI= -github.com/aws/aws-sdk-go v1.54.19/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU= +github.com/aws/aws-sdk-go v1.55.5 
h1:KKUZBfBoyqy5d3swXyiC7Q76ic40rYcbqH7qjh59kzU= +github.com/aws/aws-sdk-go v1.55.5/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU= github.com/bboreham/go-loser v0.0.0-20230920113527-fcc2c21820a3 h1:6df1vn4bBlDDo4tARvBm7l6KA9iVMnE3NWizDeWSrps= github.com/bboreham/go-loser v0.0.0-20230920113527-fcc2c21820a3/go.mod h1:CIWtjkly68+yqLPbvwwR/fjNJA/idrtULjZWh2v1ys0= github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q= @@ -103,6 +103,8 @@ github.com/bytedance/sonic/loader v0.1.1/go.mod h1:ncP89zfokxS5LZrJxl5z0UJcsk4M4 github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8= github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE= github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= +github.com/cert-manager/cert-manager v1.16.2 h1:c9UU2E+8XWGruyvC/mdpc1wuLddtgmNr8foKdP7a8Jg= +github.com/cert-manager/cert-manager v1.16.2/go.mod h1:MfLVTL45hFZsqmaT1O0+b2ugaNNQQZttSFV9hASHUb0= github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= @@ -117,8 +119,8 @@ github.com/cloudwego/base64x v0.1.4/go.mod h1:0zlkT4Wn5C6NdauXdJRhSKRlJvmclQ1hhJ github.com/cloudwego/iasm v0.2.0 h1:1KNIy1I1H9hNNFEEH3DVnI4UujN+1zjpuk6gwHLTssg= github.com/cloudwego/iasm v0.2.0/go.mod h1:8rXZaNYT2n95jn+zTI1sDr+IgcD2GVs0nlbbQPiEFhY= github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= -github.com/cncf/xds/go v0.0.0-20240423153145-555b57ec207b h1:ga8SEFjZ60pxLcmhnThWgvH2wg8376yUJmPhEH4H3kw= -github.com/cncf/xds/go v0.0.0-20240423153145-555b57ec207b/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8= +github.com/cncf/xds/go 
v0.0.0-20240723142845-024c85f92f20 h1:N+3sFI5GUjRKBi+i0TxYVST9h4Ie192jJWpHvthBBgg= +github.com/cncf/xds/go v0.0.0-20240723142845-024c85f92f20/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8= github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I= github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo= github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o= @@ -128,14 +130,14 @@ github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/dennwc/varint v1.0.0 h1:kGNFFSSw8ToIy3obO/kKr8U9GZYUAxQEVuix4zfDWzE= github.com/dennwc/varint v1.0.0/go.mod h1:hnItb35rvZvJrbTALZtY/iQfDs48JKRG1RPpgziApxA= -github.com/digitalocean/godo v1.118.0 h1:lkzGFQmACrVCp7UqH1sAi4JK/PWwlc5aaxubgorKmC4= -github.com/digitalocean/godo v1.118.0/go.mod h1:Vk0vpCot2HOAJwc5WE8wljZGtJ3ZtWIc8MQ8rF38sdo= +github.com/digitalocean/godo v1.125.0 h1:wGPBQRX9Wjo0qCF0o8d25mT3A84Iw8rfHnZOPyvHcMQ= +github.com/digitalocean/godo v1.125.0/go.mod h1:PU8JB6I1XYkQIdHFop8lLAY9ojp6M0XcU0TWaQSxbrc= github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk= github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E= github.com/dnaeon/go-vcr v1.2.0 h1:zHCHvJYTMh1N7xnV7zf1m1GPBF9Ad0Jk/whtQ1663qI= github.com/dnaeon/go-vcr v1.2.0/go.mod h1:R4UdLID7HZT3taECzJs4YgbbH6PIGXB6W/sc5OLb6RQ= -github.com/docker/docker v27.1.1+incompatible h1:hO/M4MtV36kzKldqnA37IWhebRA+LnqqcqDja6kVaKY= -github.com/docker/docker v27.1.1+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= +github.com/docker/docker v27.2.0+incompatible h1:Rk9nIVdfH3+Vz4cyI/uhbINhEZ/oLmc+CBXmH6fbNk4= +github.com/docker/docker v27.2.0+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= github.com/docker/go-connections v0.4.0 
h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ= github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec= github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4= @@ -149,11 +151,11 @@ github.com/emicklei/go-restful/v3 v3.12.1/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRr github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98= -github.com/envoyproxy/go-control-plane v0.12.1-0.20240621013728-1eb8caab5155 h1:IgJPqnrlY2Mr4pYB6oaMKvFvwJ9H+X6CCY5x1vCTcpc= -github.com/envoyproxy/go-control-plane v0.12.1-0.20240621013728-1eb8caab5155/go.mod h1:5Wkq+JduFtdAXihLmeTJf+tRYIT4KBc2vPXDhwVo1pA= +github.com/envoyproxy/go-control-plane v0.13.0 h1:HzkeUz1Knt+3bK+8LG1bxOO/jzWZmdxpwC51i202les= +github.com/envoyproxy/go-control-plane v0.13.0/go.mod h1:GRaKG3dwvFoTg4nj7aXdZnvMg4d7nvT/wl9WgVXn3Q8= github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= -github.com/envoyproxy/protoc-gen-validate v1.0.4 h1:gVPz/FMfvh57HdSJQyvBtF00j8JU4zdyUgIUNhlgg0A= -github.com/envoyproxy/protoc-gen-validate v1.0.4/go.mod h1:qys6tmnRsYrQqIhm2bvKZH4Blx/1gTIZ2UKVY1M+Yew= +github.com/envoyproxy/protoc-gen-validate v1.1.0 h1:tntQDh69XqOCOZsDz0lVJQez/2L6Uu2PdjCQwWCJ3bM= +github.com/envoyproxy/protoc-gen-validate v1.1.0/go.mod h1:sXRDRVmzEbkM7CVcM06s9shE/m23dg3wzjl0UWqJ2q4= github.com/evanphx/json-patch v5.9.0+incompatible h1:fBXyNpNMuTTDdquAq/uisOr2lShz4oaXpDTX2bLe7ls= github.com/evanphx/json-patch v5.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= github.com/evanphx/json-patch/v5 v5.9.0 h1:kcBlZQbplgElYIlo/n1hJbls2z/1awpXxpRi0/FOJfg= @@ -311,27 +313,27 @@ github.com/google/pprof 
v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hf github.com/google/pprof v0.0.0-20240727154555-813a5fbdbec8 h1:FKHo8hFI3A+7w0aUQuYXQ+6EN5stWmeY/AZqtM8xk9k= github.com/google/pprof v0.0.0-20240727154555-813a5fbdbec8/go.mod h1:K1liHPHnj73Fdn/EKuT8nrFqBihUSKXoLYU0BuatOYo= github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI= -github.com/google/s2a-go v0.1.7 h1:60BLSyTrOV4/haCDW4zb1guZItoSq8foHCXrAnjBo/o= -github.com/google/s2a-go v0.1.7/go.mod h1:50CgR4k1jNlWBu4UfS4AcfhVe1r6pdZPygJ3R8F0Qdw= +github.com/google/s2a-go v0.1.8 h1:zZDs9gcbt9ZPLV0ndSyQk6Kacx2g/X+SKYovpnz3SMM= +github.com/google/s2a-go v0.1.8/go.mod h1:6iNWHTpQ+nfNRN5E00MSdfDwVesa8hhS32PhPO8deJA= github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= -github.com/googleapis/enterprise-certificate-proxy v0.3.2 h1:Vie5ybvEvT75RniqhfFxPRy3Bf7vr3h0cechB90XaQs= -github.com/googleapis/enterprise-certificate-proxy v0.3.2/go.mod h1:VLSiSSBs/ksPL8kq3OBOQ6WRI2QnaFynd1DCjZ62+V0= +github.com/googleapis/enterprise-certificate-proxy v0.3.4 h1:XYIDZApgAnrN1c855gTgghdIA6Stxb52D5RnLI1SLyw= +github.com/googleapis/enterprise-certificate-proxy v0.3.4/go.mod h1:YKe7cfqYXjKGpGvmSg28/fFvhNzinZQm8DGnaburhGA= github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg= github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk= -github.com/googleapis/gax-go/v2 v2.12.5 h1:8gw9KZK8TiVKB6q3zHY3SBzLnrGp6HQjyfYBYGmXdxA= -github.com/googleapis/gax-go/v2 v2.12.5/go.mod h1:BUDKcWo+RaKq5SC9vVYL0wLADa3VcfswbOMMRmB9H3E= -github.com/gophercloud/gophercloud v1.13.0 h1:8iY9d1DAbzMW6Vok1AxbbK5ZaUjzMp0tdyt4fX9IeJ0= -github.com/gophercloud/gophercloud v1.13.0/go.mod h1:aAVqcocTSXh2vYFZ1JTvx4EQmfgzxRcNupUfxZbBNDM= +github.com/googleapis/gax-go/v2 v2.13.0 
h1:yitjD5f7jQHhyDsnhKEBU52NdvvdSeGzlAnDPT0hH1s= +github.com/googleapis/gax-go/v2 v2.13.0/go.mod h1:Z/fvTZXF8/uw7Xu5GuslPw+bplx6SS338j1Is2S+B7A= +github.com/gophercloud/gophercloud v1.14.0 h1:Bt9zQDhPrbd4qX7EILGmy+i7GP35cc+AAL2+wIJpUE8= +github.com/gophercloud/gophercloud v1.14.0/go.mod h1:aAVqcocTSXh2vYFZ1JTvx4EQmfgzxRcNupUfxZbBNDM= github.com/gorilla/websocket v1.5.1 h1:gmztn0JnHVt9JZquRuzLw3g4wouNVzKL15iLr/zn/QY= github.com/gorilla/websocket v1.5.1/go.mod h1:x3kM2JMyaluk02fnUJpQuwD2dCS5NDG2ZHL0uE0tcaY= github.com/grafana/regexp v0.0.0-20240518133315-a468a5bfb3bc h1:GN2Lv3MGO7AS6PrRoT6yV5+wkrOpcszoIsO4+4ds248= github.com/grafana/regexp v0.0.0-20240518133315-a468a5bfb3bc/go.mod h1:+JKpmjMGhpgPL+rXZ5nsZieVzvarn86asRlBg4uNGnk= -github.com/grpc-ecosystem/grpc-gateway/v2 v2.22.0 h1:asbCHRVmodnJTuQ3qamDwqVOIjwqUPTYmYuemVOx+Ys= -github.com/grpc-ecosystem/grpc-gateway/v2 v2.22.0/go.mod h1:ggCgvZ2r7uOoQjOyu2Y1NhHmEPPzzuhWgcza5M1Ji1I= -github.com/hashicorp/consul/api v1.29.2 h1:aYyRn8EdE2mSfG14S1+L9Qkjtz8RzmaWh6AcNGRNwPw= -github.com/hashicorp/consul/api v1.29.2/go.mod h1:0YObcaLNDSbtlgzIRtmRXI1ZkeuK0trCBxwZQ4MYnIk= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.23.0 h1:ad0vkEBuk23VJzZR9nkLVG0YAoN9coASF1GusYX6AlU= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.23.0/go.mod h1:igFoXX2ELCW06bol23DWPB5BEWfZISOzSP5K2sbLea0= +github.com/hashicorp/consul/api v1.29.4 h1:P6slzxDLBOxUSj3fWo2o65VuKtbtOXFi7TSSgtXutuE= +github.com/hashicorp/consul/api v1.29.4/go.mod h1:HUlfw+l2Zy68ceJavv2zAyArl2fqhGWnMycyt56sBgg= github.com/hashicorp/consul/proto-public v0.6.2 h1:+DA/3g/IiKlJZb88NBn0ZgXrxJp2NlvCZdEyl+qxvL0= github.com/hashicorp/consul/proto-public v0.6.2/go.mod h1:cXXbOg74KBNGajC+o8RlA502Esf0R9prcoJgiOX/2Tg= github.com/hashicorp/consul/sdk v0.16.1 h1:V8TxTnImoPD5cj0U9Spl0TUxcytjcbbJeADFF07KdHg= @@ -383,19 +385,20 @@ github.com/hashicorp/nomad/api v0.0.0-20240717122358-3d93bd3778f3 h1:fgVfQ4AC1av github.com/hashicorp/nomad/api v0.0.0-20240717122358-3d93bd3778f3/go.mod 
h1:svtxn6QnrQ69P23VvIWMR34tg3vmwLz4UdUzm1dSCgE= github.com/hashicorp/serf v0.10.1 h1:Z1H2J60yRKvfDYAOZLd2MU0ND4AH/WDz7xYHDWQsIPY= github.com/hashicorp/serf v0.10.1/go.mod h1:yL2t6BqATOLGc5HF7qbFkTfXoPIY0WZdWHfEvMqbG+4= -github.com/hetznercloud/hcloud-go/v2 v2.10.2 h1:9gyTUPhfNbfbS40Spgij5mV5k37bOZgt8iHKCbfGs5I= -github.com/hetznercloud/hcloud-go/v2 v2.10.2/go.mod h1:xQ+8KhIS62W0D78Dpi57jsufWh844gUw1az5OUvaeq8= +github.com/hetznercloud/hcloud-go/v2 v2.13.1 h1:jq0GP4QaYE5d8xR/Zw17s9qoaESRJMXfGmtD1a/qckQ= +github.com/hetznercloud/hcloud-go/v2 v2.13.1/go.mod h1:dhix40Br3fDiBhwaSG/zgaYOFFddpfBm/6R1Zz0IiF0= github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc= github.com/imdario/mergo v0.3.16 h1:wwQJbIsHYGMUyLSPrEq1CT16AhnhNJQ51+4fdHUnCl4= github.com/imdario/mergo v0.3.16/go.mod h1:WBLT9ZmE3lPoWsEzCh9LPo3TiwVN+ZKEjmz+hD27ysY= github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8= github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw= -github.com/ionos-cloud/sdk-go/v6 v6.1.11 h1:J/uRN4UWO3wCyGOeDdMKv8LWRzKu6UIkLEaes38Kzh8= -github.com/ionos-cloud/sdk-go/v6 v6.1.11/go.mod h1:EzEgRIDxBELvfoa/uBN0kOQaqovLjUWEB7iW4/Q+t4k= +github.com/ionos-cloud/sdk-go/v6 v6.2.1 h1:mxxN+frNVmbFrmmFfXnBC3g2USYJrl6mc1LW2iNYbFY= +github.com/ionos-cloud/sdk-go/v6 v6.2.1/go.mod h1:SXrO9OGyWjd2rZhAhEpdYN6VUAODzzqRdqA9BCviQtI= github.com/jarcoal/httpmock v1.3.1 h1:iUx3whfZWVf3jT01hQTO/Eo5sAYtB2/rqaUuOtpInww= github.com/jarcoal/httpmock v1.3.1/go.mod h1:3yb8rc4BI7TCBhFY8ng0gjuLKJNquuDNiPaZjnENuYg= -github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg= github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo= +github.com/jmespath/go-jmespath v0.4.1-0.20220621161143-b0104c826a24 h1:liMMTbpW34dhU4az1GN0pTPADwNmvoRSeoZ6PItiqnY= +github.com/jmespath/go-jmespath 
v0.4.1-0.20220621161143-b0104c826a24/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo= github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8= github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U= github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY= @@ -436,8 +439,8 @@ github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0 github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw= github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ= github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI= -github.com/linode/linodego v1.37.0 h1:B/2Spzv9jYXzKA+p+GD8fVCNJ7Wuw6P91ZDD9eCkkso= -github.com/linode/linodego v1.37.0/go.mod h1:L7GXKFD3PoN2xSEtFc04wIXP5WK65O10jYQx0PQISWQ= +github.com/linode/linodego v1.40.0 h1:7ESY0PwK94hoggoCtIroT1Xk6b1flrFBNZ6KwqbTqlI= +github.com/linode/linodego v1.40.0/go.mod h1:NsUw4l8QrLdIofRg1NYFBbW5ZERnmbZykVBszPZLORM= github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0= github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc= github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU= @@ -462,8 +465,8 @@ github.com/metalmatze/signal v0.0.0-20210307161603-1c9aa721a97a h1:0usWxe5SGXKQo github.com/metalmatze/signal v0.0.0-20210307161603-1c9aa721a97a/go.mod h1:3OETvrxfELvGsU2RoGGWercfeZ4bCL3+SOwzIWtJH/Q= github.com/miekg/dns v1.1.26/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKjuso= github.com/miekg/dns v1.1.41/go.mod h1:p6aan82bvRIyn+zDIv9xYNUpwa73JcSh9BKwknJysuI= -github.com/miekg/dns v1.1.61 h1:nLxbwF3XxhwVSm8g9Dghm9MHPaUZuqhPiGL+675ZmEs= -github.com/miekg/dns v1.1.61/go.mod h1:mnAarhS3nWaW+NVP2wTkYVIZyHNJ098SJZUki3eykwQ= +github.com/miekg/dns v1.1.62 h1:cN8OuEF1/x5Rq6Np+h1epln8OiyPWV+lROx9LxcGgIQ= 
+github.com/miekg/dns v1.1.62/go.mod h1:mvDlcItzm+br7MToIKqkglaGhlFMHJ9DTNNWONWXbNQ= github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI= github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= @@ -534,8 +537,8 @@ github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndr github.com/posener/complete v1.2.3/go.mod h1:WZIdtGGp+qx0sLrYKtIRAruyNpv6hFCicSgv7Sy7s/s= github.com/prometheus-community/prom-label-proxy v0.11.0 h1:IO02WiiFMfcIqvjhwMbCYnDJiTNcSHBrkCGRQ/7KDd0= github.com/prometheus-community/prom-label-proxy v0.11.0/go.mod h1:lfvrG70XqsxWDrSh1843QXBG0fSg8EbIXmAo8xGsvw8= -github.com/prometheus-operator/prometheus-operator v0.76.0 h1:EjGJiQVF3BUy/iygeRlN6iMBIAySMGZobEm7+7A95pI= -github.com/prometheus-operator/prometheus-operator v0.76.0/go.mod h1:y4PxsSBsOBwK1vXIw9U8DGLi8EptquItyP2IpqUtTGs= +github.com/prometheus-operator/prometheus-operator v0.76.2 h1:B+UcRc7py+zpow2H+q2V8sPF3jmsQNreJujBt36wZ+Q= +github.com/prometheus-operator/prometheus-operator v0.76.2/go.mod h1:g8uevau0bHz6HcqFW/hDbhmrgdQsmZBpGV/aKOSj+XI= github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.76.2 h1:BpGDC87A2SaxbKgONsFLEX3kRcRJee2aLQbjXsuz0hA= github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.76.2/go.mod h1:Rd8YnCqz+2FYsiGmE2DMlaLjQRB4v2jFNnzCt9YY4IM= github.com/prometheus-operator/prometheus-operator/pkg/client v0.76.2 h1:yncs8NglhE3hB+viNsabCAF9TBBDOBljHUyxHC5fSGY= @@ -548,8 +551,8 @@ github.com/prometheus/client_golang v1.4.0/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3O github.com/prometheus/client_golang v1.5.1/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3OK1iX/F2sw+iXX5zU= github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M= github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0= 
-github.com/prometheus/client_golang v1.20.4 h1:Tgh3Yr67PaOv/uTqloMsCEdeuFTatm5zIq5+qNN23vI= -github.com/prometheus/client_golang v1.20.4/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE= +github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y= +github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE= github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= @@ -561,8 +564,8 @@ github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8b github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo= github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc= github.com/prometheus/common v0.29.0/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls= -github.com/prometheus/common v0.60.0 h1:+V9PAREWNvJMAuJ1x1BaWl9dewMW4YrHZQbx0sJNllA= -github.com/prometheus/common v0.60.0/go.mod h1:h0LYf1R1deLSKtD4Vdg8gy4RuOvENW2J/h19V5NADQw= +github.com/prometheus/common v0.60.1 h1:FUas6GcOw66yB/73KC+BOZoFJmbo/1pojoILArPAaSc= +github.com/prometheus/common v0.60.1/go.mod h1:h0LYf1R1deLSKtD4Vdg8gy4RuOvENW2J/h19V5NADQw= github.com/prometheus/common/sigv4 v0.1.0 h1:qoVebwtwwEhS85Czm2dSROY5fTo2PAPEVdDeppTwGX4= github.com/prometheus/common/sigv4 v0.1.0/go.mod h1:2Jkxxk9yYvCkE5G1sQT7GuEXm57JrvHu9k5YwTjsNtI= github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= @@ -572,15 +575,15 @@ github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4O github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA= 
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc= github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk= -github.com/prometheus/prometheus v0.54.1 h1:vKuwQNjnYN2/mDoWfHXDhAsz/68q/dQDb+YbcEqU7MQ= -github.com/prometheus/prometheus v0.54.1/go.mod h1:xlLByHhk2g3ycakQGrMaU8K7OySZx98BzeCR99991NY= +github.com/prometheus/prometheus v0.55.1 h1:+NM9V/h4A+wRkOyQzGewzgPPgq/iX2LUQoISNvmjZmI= +github.com/prometheus/prometheus v0.55.1/go.mod h1:GGS7QlWKCqCbcEzWsVahYIfQwiGhcExkarHyLJTsv6I= github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= -github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8= -github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4= +github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII= +github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o= github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts= -github.com/scaleway/scaleway-sdk-go v1.0.0-beta.29 h1:BkTk4gynLjguayxrYxZoMZjBnAOh7ntQvUkOFmkMqPU= -github.com/scaleway/scaleway-sdk-go v1.0.0-beta.29/go.mod h1:fCa7OJZ/9DRTnOKmxvT6pn+LPWUptQAmHF/SBJUGEcg= +github.com/scaleway/scaleway-sdk-go v1.0.0-beta.30 h1:yoKAVkEVwAqbGbR8n87rHQ1dulL25rKloGadb3vm770= +github.com/scaleway/scaleway-sdk-go v1.0.0-beta.30/go.mod h1:sH0u6fq6x4R5M7WxkoQFY/o7UaiItec0o1LinLCJNq8= github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529 h1:nn5Wsu0esKSJiIVhscUtVbo7ada43DJhG55ua/hjS5I= github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc= github.com/shirou/gopsutil v3.21.11+incompatible h1:+1+c1VGhc88SSonWP6foOcLhvnKlUeu/erjjvaPEYiI= @@ -612,8 +615,9 @@ 
github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1F github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= -github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg= github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= +github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA= +github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= github.com/tklauser/go-sysconf v0.3.13 h1:GBUpcahXSpR2xN01jhkNAbTLRk2Yzgggk8IM08lq3r4= github.com/tklauser/go-sysconf v0.3.13/go.mod h1:zwleP4Q4OehZHGn4CYZDipCgg9usW5IJePewFCGVEa0= github.com/tklauser/numcpus v0.7.0 h1:yjuerZP127QG9m5Zh/mSO4wqurYil27tHrqwRoRjpr4= @@ -643,28 +647,28 @@ go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0= go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo= -go.opentelemetry.io/collector/featuregate v1.17.0 h1:vpfXyWe7DFqCsDArsR9rAKKtVpt72PKjzjeqPegViws= -go.opentelemetry.io/collector/featuregate v1.17.0/go.mod h1:47xrISO71vJ83LSMm8+yIDsUbKktUp48Ovt7RR6VbRs= -go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0 h1:4K4tsIXefpVJtvA/8srF4V4y0akAoPHkIslgAkjixJA= -go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0/go.mod h1:jjdQuTGVsXV4vSs+CJ2qYDeDPf9yIJV23qlIzBm73Vg= -go.opentelemetry.io/otel v1.30.0 h1:F2t8sK4qf1fAmY9ua4ohFS/K+FUuOPemHUIXHtktrts= -go.opentelemetry.io/otel v1.30.0/go.mod h1:tFw4Br9b7fOS+uEao81PJjVMjW/5fvNCbpsDIXqP0pc= -go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.30.0 
h1:VrMAbeJz4gnVDg2zEzjHG4dEH86j4jO6VYB+NgtGD8s= -go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.30.0/go.mod h1:qqN/uFdpeitTvm+JDqqnjm517pmQRYxTORbETHq5tOc= -go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0 h1:3Q/xZUyC1BBkualc9ROb4G8qkH90LXEIICcs5zv1OYY= -go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0/go.mod h1:s75jGIWA9OfCMzF0xr+ZgfrB5FEbbV7UuYo32ahUiFI= -go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.28.0 h1:j9+03ymgYhPKmeXGk5Zu+cIZOlVzd9Zv7QIiyItjFBU= -go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.28.0/go.mod h1:Y5+XiUG4Emn1hTfciPzGPJaSI+RpDts6BnCIir0SLqk= -go.opentelemetry.io/otel/exporters/prometheus v0.52.0 h1:kmU3H0b9ufFSi8IQCcxack+sWUblKkFbqWYs6YiACGQ= -go.opentelemetry.io/otel/exporters/prometheus v0.52.0/go.mod h1:+wsAp2+JhuGXX7YRkjlkx6hyWY3ogFPfNA4x3nyiAh0= -go.opentelemetry.io/otel/metric v1.30.0 h1:4xNulvn9gjzo4hjg+wzIKG7iNFEaBMX00Qd4QIZs7+w= -go.opentelemetry.io/otel/metric v1.30.0/go.mod h1:aXTfST94tswhWEb+5QjlSqG+cZlmyXy/u8jFpor3WqQ= -go.opentelemetry.io/otel/sdk v1.30.0 h1:cHdik6irO49R5IysVhdn8oaiR9m8XluDaJAs4DfOrYE= -go.opentelemetry.io/otel/sdk v1.30.0/go.mod h1:p14X4Ok8S+sygzblytT1nqG98QG2KYKv++HE0LY/mhg= -go.opentelemetry.io/otel/sdk/metric v1.30.0 h1:QJLT8Pe11jyHBHfSAgYH7kEmT24eX792jZO1bo4BXkM= -go.opentelemetry.io/otel/sdk/metric v1.30.0/go.mod h1:waS6P3YqFNzeP01kuo/MBBYqaoBJl7efRQHOaydhy1Y= -go.opentelemetry.io/otel/trace v1.30.0 h1:7UBkkYzeg3C7kQX8VAidWh2biiQbtAKjyIML8dQ9wmc= -go.opentelemetry.io/otel/trace v1.30.0/go.mod h1:5EyKqTzzmyqB9bwtCCq6pDLktPK6fmGf/Dph+8VI02o= +go.opentelemetry.io/collector/featuregate v1.20.0 h1:Mi7nMy/q52eruI+6jWnMKUOeM55XvwoPnGcdB1++O8c= +go.opentelemetry.io/collector/featuregate v1.20.0/go.mod h1:47xrISO71vJ83LSMm8+yIDsUbKktUp48Ovt7RR6VbRs= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.54.0 h1:TT4fX+nBOA/+LUkobKGW1ydGcn+G3vRw9+g5HwCphpk= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp 
v0.54.0/go.mod h1:L7UH0GbB0p47T4Rri3uHjbpCFYrVrwc1I25QhNPiGK8= +go.opentelemetry.io/otel v1.32.0 h1:WnBN+Xjcteh0zdk01SVqV55d/m62NJLJdIyb4y/WO5U= +go.opentelemetry.io/otel v1.32.0/go.mod h1:00DCVSB0RQcnzlwyTfqtxSm+DRr9hpYrHjNGiBHVQIg= +go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.32.0 h1:t/Qur3vKSkUCcDVaSumWF2PKHt85pc7fRvFuoVT8qFU= +go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.32.0/go.mod h1:Rl61tySSdcOJWoEgYZVtmnKdA0GeKrSqkHC1t+91CH8= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.29.0 h1:dIIDULZJpgdiHz5tXrTgKIMLkus6jEFa7x5SOKcyR7E= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.29.0/go.mod h1:jlRVBe7+Z1wyxFSUs48L6OBQZ5JwH2Hg/Vbl+t9rAgI= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.29.0 h1:JAv0Jwtl01UFiyWZEMiJZBiTlv5A50zNs8lsthXqIio= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.29.0/go.mod h1:QNKLmUEAq2QUbPQUfvw4fmv0bgbK7UlOSFCnXyfvSNc= +go.opentelemetry.io/otel/exporters/prometheus v0.54.0 h1:rFwzp68QMgtzu9PgP3jm9XaMICI6TsofWWPcBDKwlsU= +go.opentelemetry.io/otel/exporters/prometheus v0.54.0/go.mod h1:QyjcV9qDP6VeK5qPyKETvNjmaaEc7+gqjh4SS0ZYzDU= +go.opentelemetry.io/otel/metric v1.32.0 h1:xV2umtmNcThh2/a/aCP+h64Xx5wsj8qqnkYZktzNa0M= +go.opentelemetry.io/otel/metric v1.32.0/go.mod h1:jH7CIbbK6SH2V2wE16W05BHCtIDzauciCRLoc/SyMv8= +go.opentelemetry.io/otel/sdk v1.32.0 h1:RNxepc9vK59A8XsgZQouW8ue8Gkb4jpWtJm9ge5lEG4= +go.opentelemetry.io/otel/sdk v1.32.0/go.mod h1:LqgegDBjKMmb2GC6/PrTnteJG39I8/vJCAP9LlJXEjU= +go.opentelemetry.io/otel/sdk/metric v1.32.0 h1:rZvFnvmvawYb0alrYkjraqJq0Z4ZUJAiyYCU9snn1CU= +go.opentelemetry.io/otel/sdk/metric v1.32.0/go.mod h1:PWeZlq0zt9YkYAp3gjKZ0eicRYvOh1Gd+X99x6GHpCQ= +go.opentelemetry.io/otel/trace v1.32.0 h1:WIC9mYrXf8TmY/EXuULKc8hR17vE+Hjv2cssQDe03fM= +go.opentelemetry.io/otel/trace v1.32.0/go.mod h1:+i4rkvCraA+tG6AzwloGaCtkx53Fa+L+V8e9a7YvhT8= go.opentelemetry.io/proto/otlp v1.3.1 h1:TrMUixzpM0yuc/znrFTP9MMRh8trP93mkCiDVeXrui0= 
go.opentelemetry.io/proto/otlp v1.3.1/go.mod h1:0X1WI4de4ZsLrrJNLAQbFeLCm3T7yBkR0XqQ7niQU+8= go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE= @@ -689,8 +693,8 @@ golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5y golang.org/x/crypto v0.0.0-20220829220503-c86fa9a7ed90/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU= golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8= -golang.org/x/crypto v0.27.0 h1:GXm2NjJrPaiv/h1tb2UH8QfgC/hOf/+z0p6PT8o1w7A= -golang.org/x/crypto v0.27.0/go.mod h1:1Xngt8kV6Dvbssa53Ziq6Eqn0HqbZi5Z6R0ZpwQzt70= +golang.org/x/crypto v0.28.0 h1:GBDwsMXVQi34v5CCYUm2jkJvu4cbtru2U4TN2PSyQnw= +golang.org/x/crypto v0.28.0/go.mod h1:rmgy+3RHxRZMyY0jjAJShp2zgEdOqj2AO7U0pYmeQ7U= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8= @@ -701,8 +705,8 @@ golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u0 golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4= golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM= golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU= -golang.org/x/exp v0.0.0-20240808152545-0cdaa3abc0fa h1:ELnwvuAXPNtPk1TJRuGkI9fDTwym6AYBu0qzT8AcHdI= -golang.org/x/exp v0.0.0-20240808152545-0cdaa3abc0fa/go.mod h1:akd2r19cwCdwSwWeIdzYQGa/EZZyqcOdwWiwj5L5eKQ= +golang.org/x/exp v0.0.0-20240909161429-701f63a606c0 h1:e66Fs6Z+fZTbFBAxKfP3PALWBtpfqks2bwGcexMxgtk= +golang.org/x/exp v0.0.0-20240909161429-701f63a606c0/go.mod 
h1:2TbTHSBQa924w8M6Xs1QcRcFwyucIwBGpK1p2f1YFFY= golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js= golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0= golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= @@ -725,8 +729,8 @@ golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= -golang.org/x/mod v0.20.0 h1:utOm6MM3R3dnawAiJgn0y+xvuYRsm1RKM/4giyfDgV0= -golang.org/x/mod v0.20.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= +golang.org/x/mod v0.21.0 h1:vvrHzRwRfVKSiLrG+d4FMl/Qi4ukBCE6kZlTUkDYRT0= +golang.org/x/mod v0.21.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY= golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= @@ -767,8 +771,8 @@ golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44= golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM= -golang.org/x/net v0.29.0 h1:5ORfpBpCs4HzDYoodCDBbwHzdR5UrLBZ3sOnUJmFoHo= -golang.org/x/net v0.29.0/go.mod h1:gLkgy8jTGERgjzMic6DS9+SP0ajcu6Xu3Orq/SpETg0= +golang.org/x/net v0.30.0 h1:AcW1SDZMkb8IpzCdQUaIq2sP4sZ4zw+55h6ynffypl4= +golang.org/x/net v0.30.0/go.mod 
h1:2wGyMJ5iFasEhkwi13ChkO/t1ECNC4X4eBKkVFyYFlU= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= @@ -790,8 +794,8 @@ golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJ golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ= -golang.org/x/sync v0.8.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= +golang.org/x/sync v0.9.0 h1:fEo0HyrW1GIgZdpbhCRO0PkJajUS5H9IFUztCgEo2jQ= +golang.org/x/sync v0.9.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= @@ -852,16 +856,16 @@ golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= -golang.org/x/sys v0.25.0 h1:r+8e+loiHxRqhXVl6ML1nO3l1+oFoWbnlu2Ehimmi34= -golang.org/x/sys v0.25.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.27.0 h1:wBqf8DvsY9Y/2P8gAfPDEYNuS30J4lPHJxXSb/nJZ+s= +golang.org/x/sys 
v0.27.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk= golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY= -golang.org/x/term v0.24.0 h1:Mh5cbb+Zk2hqqXNO7S1iTjEphVL+jb8ZWaqh/g+JWkM= -golang.org/x/term v0.24.0/go.mod h1:lOBK/LVxemqiMij05LGJ0tzNr8xlmwBRJ81PX6wVLH8= +golang.org/x/term v0.25.0 h1:WtHI/ltw4NvSUig5KARz9h521QvRC8RmF/cuYqifU24= +golang.org/x/term v0.25.0/go.mod h1:RPyXicDX+6vLxogjjRxjgD2TKtmAO6NZBsBRfrOLu7M= golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= @@ -873,8 +877,8 @@ golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= -golang.org/x/text v0.18.0 h1:XvMDiNzPAl0jr17s6W9lcaIhGUfUORdGCNsuLmPG224= -golang.org/x/text v0.18.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY= +golang.org/x/text v0.20.0 h1:gK/Kv2otX8gz+wn7Rmb3vT96ZwuoxnQlY+HlJVj7Qug= +golang.org/x/text v0.20.0/go.mod h1:D4IsuqiFMhST5bX19pQ9ikHC2GsaKyk/oF+pn3ducp4= golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time 
v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= @@ -926,8 +930,8 @@ golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= -golang.org/x/tools v0.24.0 h1:J1shsA93PJUEVaUSaay7UXAyE8aimq3GW0pjlolpa24= -golang.org/x/tools v0.24.0/go.mod h1:YhNqVBIfWHdzvTLs0d8LCuMhkKUgSUKldakyV7W/WDQ= +golang.org/x/tools v0.25.0 h1:oFU9pkj/iJgs+0DT+VMHrx+oBKs/LJMV+Uvg78sl+fE= +golang.org/x/tools v0.25.0/go.mod h1:/vtpO8WL1N9cQC3FN5zPqb//fRXskFHbLKk4OW1Q7rg= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= @@ -950,8 +954,8 @@ google.golang.org/api v0.24.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0M google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE= google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM= google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc= -google.golang.org/api v0.188.0 h1:51y8fJ/b1AaaBRJr4yWm96fPcuxSo0JcegXE3DaHQHw= -google.golang.org/api v0.188.0/go.mod h1:VR0d+2SIiWOYG3r/jdm7adPW9hI2aRv9ETOSCQ9Beag= +google.golang.org/api v0.198.0 h1:OOH5fZatk57iN0A7tjJQzt6aPfYQ1JiWkt1yGseazks= +google.golang.org/api v0.198.0/go.mod h1:/Lblzl3/Xqqk9hw/yS97TImKTUwnf1bv89v7+OagJzc= google.golang.org/appengine v1.1.0/go.mod 
h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= @@ -987,10 +991,10 @@ google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7Fc google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto/googleapis/api v0.0.0-20240903143218-8af14fe29dc1 h1:hjSy6tcFQZ171igDaN5QHOw2n6vx40juYbC/x67CEhc= -google.golang.org/genproto/googleapis/api v0.0.0-20240903143218-8af14fe29dc1/go.mod h1:qpvKtACPCQhAdu3PyQgV4l3LMXZEtft7y8QcarRsp9I= -google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1 h1:pPJltXNxVzT4pK9yD8vR9X75DaWYYmLGMsEvBfFQZzQ= -google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1/go.mod h1:UqMtugtsSgubUsoxbuAoiCXvqvErP7Gf0so0mK9tHxU= +google.golang.org/genproto/googleapis/api v0.0.0-20241104194629-dd2ea8efbc28 h1:M0KvPgPmDZHPlbRbaNU1APr28TvwvvdUPlSv7PUvy8g= +google.golang.org/genproto/googleapis/api v0.0.0-20241104194629-dd2ea8efbc28/go.mod h1:dguCy7UOdZhTvLzDyt15+rOrawrpM4q7DD9dQ1P11P4= +google.golang.org/genproto/googleapis/rpc v0.0.0-20241104194629-dd2ea8efbc28 h1:XVhgTWWV3kGQlwJHR3upFWZeTsei6Oks1apkZSeonIE= +google.golang.org/genproto/googleapis/rpc v0.0.0-20241104194629-dd2ea8efbc28/go.mod h1:GX3210XPVPUjJbTUbvwI8f2IpZDMZuPJWDzDuebbviI= google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38= google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM= @@ -1004,8 +1008,8 @@ 
google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3Iji google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak= google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak= google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc= -google.golang.org/grpc v1.66.1 h1:hO5qAXR19+/Z44hmvIM4dQFMSYX9XcWsByfoxutBpAM= -google.golang.org/grpc v1.66.1/go.mod h1:s3/l6xSSCURdVfAnL+TqCNMyTDAGN6+lZeVxnZR128Y= +google.golang.org/grpc v1.67.1 h1:zWnc1Vrcno+lHZCOofnIMvycFcc0QRGIzm9dhnDX68E= +google.golang.org/grpc v1.67.1/go.mod h1:1gLDyUQU7CTLJI90u3nXZ9ekeghjeM7pTDZlqFNg2AA= google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= @@ -1017,8 +1021,8 @@ google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpAD google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4= google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c= google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= -google.golang.org/protobuf v1.34.2 h1:6xV6lTsCfpGD21XK49h7MhtcApnLqkfYgPcdHftf6hg= -google.golang.org/protobuf v1.34.2/go.mod h1:qYOHts0dSfpeUzUFpOMr/WGzszTmLH+DiWniOlNbLDw= +google.golang.org/protobuf v1.35.1 h1:m3LfL6/Ca+fqnjnlqQXNpFPABW1UD7mjh8KO2mKFytA= +google.golang.org/protobuf v1.35.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE= gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 
v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= @@ -1052,31 +1056,33 @@ honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWh honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg= honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= -k8s.io/api v0.31.1 h1:Xe1hX/fPW3PXYYv8BlozYqw63ytA92snr96zMW9gWTU= -k8s.io/api v0.31.1/go.mod h1:sbN1g6eY6XVLeqNsZGLnI5FwVseTrZX7Fv3O26rhAaI= -k8s.io/apiextensions-apiserver v0.31.1 h1:L+hwULvXx+nvTYX/MKM3kKMZyei+UiSXQWciX/N6E40= -k8s.io/apiextensions-apiserver v0.31.1/go.mod h1:tWMPR3sgW+jsl2xm9v7lAyRF1rYEK71i9G5dRtkknoQ= -k8s.io/apimachinery v0.31.1 h1:mhcUBbj7KUjaVhyXILglcVjuS4nYXiwC+KKFBgIVy7U= -k8s.io/apimachinery v0.31.1/go.mod h1:rsPdaZJfTfLsNJSQzNHQvYoTmxhoOEofxtOsF3rtsMo= -k8s.io/client-go v0.31.1 h1:f0ugtWSbWpxHR7sjVpQwuvw9a3ZKLXX0u0itkFXufb0= -k8s.io/client-go v0.31.1/go.mod h1:sKI8871MJN2OyeqRlmA4W4KM9KBdBUpDLu/43eGemCg= -k8s.io/component-base v0.31.1 h1:UpOepcrX3rQ3ab5NB6g5iP0tvsgJWzxTyAo20sgYSy8= -k8s.io/component-base v0.31.1/go.mod h1:WGeaw7t/kTsqpVTaCoVEtillbqAhF2/JgvO0LDOMa0w= +k8s.io/api v0.31.3 h1:umzm5o8lFbdN/hIXbrK9oRpOproJO62CV1zqxXrLgk8= +k8s.io/api v0.31.3/go.mod h1:UJrkIp9pnMOI9K2nlL6vwpxRzzEX5sWgn8kGQe92kCE= +k8s.io/apiextensions-apiserver v0.31.3 h1:+GFGj2qFiU7rGCsA5o+p/rul1OQIq6oYpQw4+u+nciE= +k8s.io/apiextensions-apiserver v0.31.3/go.mod h1:2DSpFhUZZJmn/cr/RweH1cEVVbzFw9YBu4T+U3mf1e4= +k8s.io/apimachinery v0.31.3 h1:6l0WhcYgasZ/wk9ktLq5vLaoXJJr5ts6lkaQzgeYPq4= +k8s.io/apimachinery v0.31.3/go.mod h1:rsPdaZJfTfLsNJSQzNHQvYoTmxhoOEofxtOsF3rtsMo= +k8s.io/client-go v0.31.3 h1:CAlZuM+PH2cm+86LOBemaJI/lQ5linJ6UFxKX/SoG+4= +k8s.io/client-go v0.31.3/go.mod h1:2CgjPUTpv3fE5dNygAr2NcM8nhHzXvxB8KL5gYc3kJs= +k8s.io/component-base v0.31.3 h1:DMCXXVx546Rfvhj+3cOm2EUxhS+EyztH423j+8sOwhQ= 
+k8s.io/component-base v0.31.3/go.mod h1:xME6BHfUOafRgT0rGVBGl7TuSg8Z9/deT7qq6w7qjIU= k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk= k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= -k8s.io/kube-openapi v0.0.0-20240808142205-8e686545bdb8 h1:1Wof1cGQgA5pqgo8MxKPtf+qN6Sh/0JzznmeGPm1HnE= -k8s.io/kube-openapi v0.0.0-20240808142205-8e686545bdb8/go.mod h1:Os6V6dZwLNii3vxFpxcNaTmH8LJJBkOTg1N0tOA0fvA= -k8s.io/kubectl v0.31.1 h1:ih4JQJHxsEggFqDJEHSOdJ69ZxZftgeZvYo7M/cpp24= -k8s.io/kubectl v0.31.1/go.mod h1:aNuQoR43W6MLAtXQ/Bu4GDmoHlbhHKuyD49lmTC8eJM= -k8s.io/utils v0.0.0-20240711033017-18e509b52bc8 h1:pUdcCO1Lk/tbT5ztQWOBi5HBgbBP1J8+AsQnQCKsi8A= -k8s.io/utils v0.0.0-20240711033017-18e509b52bc8/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +k8s.io/kube-openapi v0.0.0-20240903163716-9e1beecbcb38 h1:1dWzkmJrrprYvjGwh9kEUxmcUV/CtNU8QM7h1FLWQOo= +k8s.io/kube-openapi v0.0.0-20240903163716-9e1beecbcb38/go.mod h1:coRQXBK9NxO98XUv3ZD6AK3xzHCxV6+b7lrquKwaKzA= +k8s.io/kubectl v0.31.3 h1:3r111pCjPsvnR98oLLxDMwAeM6OPGmPty6gSKaLTQes= +k8s.io/kubectl v0.31.3/go.mod h1:lhMECDCbJN8He12qcKqs2QfmVo9Pue30geovBVpH5fs= +k8s.io/utils v0.0.0-20240921022957-49e7df575cb6 h1:MDF6h2H/h4tbzmtIKTuctcwZmY0tY9mD9fNT47QO6HI= +k8s.io/utils v0.0.0-20240921022957-49e7df575cb6/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= nullprogram.com/x/optparse v1.0.0/go.mod h1:KdyPE+Igbe0jQUrVfMqDMeJQIJZEuyV7pjYmp6pbG50= rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8= rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4= rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0= rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA= -sigs.k8s.io/controller-runtime v0.19.0 h1:nWVM7aq+Il2ABxwiCizrVDSlmDcshi9llbaFbC0ji/Q= -sigs.k8s.io/controller-runtime v0.19.0/go.mod h1:iRmWllt8IlaLjvTTDLhRBXIEtkCK6hwVBJJsYS9Ajf4= +sigs.k8s.io/controller-runtime v0.19.2 
h1:3sPrF58XQEPzbE8T81TN6selQIMGbtYwuaJ6eDssDF8= +sigs.k8s.io/controller-runtime v0.19.2/go.mod h1:iRmWllt8IlaLjvTTDLhRBXIEtkCK6hwVBJJsYS9Ajf4= +sigs.k8s.io/gateway-api v1.1.0 h1:DsLDXCi6jR+Xz8/xd0Z1PYl2Pn0TyaFMOPPZIj4inDM= +sigs.k8s.io/gateway-api v1.1.0/go.mod h1:ZH4lHrL2sDi0FHZ9jjneb8kKnGzFWyrTya35sWUTrRs= sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo= sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0= sigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+sGiqlzvrtq4= diff --git a/internal/autodetect/autodetectutils/utils.go b/internal/autodetect/autodetectutils/utils.go new file mode 100644 index 0000000000..9bbf64357e --- /dev/null +++ b/internal/autodetect/autodetectutils/utils.go @@ -0,0 +1,47 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package autodetectutils + +import ( + "fmt" + "os" +) + +const ( + SA_ENV_VAR = "SERVICE_ACCOUNT_NAME" + NAMESPACE_ENV_VAR = "NAMESPACE" + NAMESPACE_FILE_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/namespace" +) + +func GetOperatorNamespace() (string, error) { + namespace := os.Getenv(NAMESPACE_ENV_VAR) + if namespace != "" { + return namespace, nil + } + + nsBytes, err := os.ReadFile(NAMESPACE_FILE_PATH) + if err != nil { + return "", err + } + return string(nsBytes), nil +} + +func GetOperatorServiceAccount() (string, error) { + sa := os.Getenv(SA_ENV_VAR) + if sa == "" { + return sa, fmt.Errorf("%s env variable not found", SA_ENV_VAR) + } + return sa, nil +} diff --git a/internal/autodetect/certmanager/check.go b/internal/autodetect/certmanager/check.go new file mode 100644 index 0000000000..f4f58da623 --- /dev/null +++ b/internal/autodetect/certmanager/check.go @@ -0,0 +1,55 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package certmanager + +import ( + "context" + "fmt" + + rbacv1 "k8s.io/api/rbac/v1" + "sigs.k8s.io/controller-runtime/pkg/webhook/admission" + + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/autodetectutils" + rbac "github.com/open-telemetry/opentelemetry-operator/internal/rbac" +) + +// CheckCertManagerPermissions checks if the operator has the needed permissions to manage cert-manager certificates automatically. 
+// If the RBAC is there, no errors nor warnings are returned. +func CheckCertManagerPermissions(ctx context.Context, reviewer *rbac.Reviewer) (admission.Warnings, error) { + namespace, err := autodetectutils.GetOperatorNamespace() + if err != nil { + return nil, fmt.Errorf("%s: %w", "not possible to check RBAC rules", err) + } + + serviceAccount, err := autodetectutils.GetOperatorServiceAccount() + if err != nil { + return nil, fmt.Errorf("%s: %w", "not possible to check RBAC rules", err) + } + + rules := []*rbacv1.PolicyRule{ + { + APIGroups: []string{"cert-manager.io"}, + Resources: []string{"issuers", "certificaterequests", "certificates"}, + Verbs: []string{"create", "get", "list", "watch", "update", "patch", "delete"}, + }, + } + + if subjectAccessReviews, err := reviewer.CheckPolicyRules(ctx, serviceAccount, namespace, rules...); err != nil { + return nil, fmt.Errorf("%s: %w", "unable to check rbac rules", err) + } else if allowed, deniedReviews := rbac.AllSubjectAccessReviewsAllowed(subjectAccessReviews); !allowed { + return rbac.WarningsGroupedByResource(deniedReviews), nil + } + return nil, nil +} diff --git a/internal/autodetect/certmanager/operator.go b/internal/autodetect/certmanager/operator.go new file mode 100644 index 0000000000..19ec9baf18 --- /dev/null +++ b/internal/autodetect/certmanager/operator.go @@ -0,0 +1,30 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package certmanager + +// Availability represents that the Cert Manager CRDs are installed and the operator's service account has permissions to manage cert-manager resources. +type Availability int + +const ( + // NotAvailable Cert Manager CRDs or RBAC permissions to manage cert-manager certificates are not available. + NotAvailable Availability = iota + + // Available Cert Manager CRDs and RBAC permissions to manage cert-manager certificates are available. + Available +) + +func (p Availability) String() string { + return [...]string{"NotAvailable", "Available"}[p] +} diff --git a/internal/autodetect/main.go b/internal/autodetect/main.go index 27c368f3f5..850a907957 100644 --- a/internal/autodetect/main.go +++ b/internal/autodetect/main.go @@ -22,6 +22,7 @@ import ( "k8s.io/client-go/discovery" "k8s.io/client-go/rest" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/fips" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/openshift" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/prometheus" @@ -36,6 +37,7 @@ type AutoDetect interface { OpenShiftRoutesAvailability() (openshift.RoutesAvailability, error) PrometheusCRsAvailability() (prometheus.Availability, error) RBACPermissions(ctx context.Context) (autoRBAC.Availability, error) + CertManagerAvailability(ctx context.Context) (certmanager.Availability, error) FIPSEnabled(ctx context.Context) bool } @@ -125,6 +127,36 @@ func (a *autoDetect) RBACPermissions(ctx context.Context) (autoRBAC.Availability return autoRBAC.Available, nil } +func (a *autoDetect) CertManagerAvailability(ctx context.Context) (certmanager.Availability, error) { + apiList, err := a.dcl.ServerGroups() + if err != nil { + return certmanager.NotAvailable, err + } + + apiGroups := apiList.Groups + certManagerFound := false + for i := 0; i < len(apiGroups); i++ { + if apiGroups[i].Name == 
"cert-manager.io" { + certManagerFound = true + break + } + } + + if !certManagerFound { + return certmanager.NotAvailable, nil + } + + w, err := certmanager.CheckCertManagerPermissions(ctx, a.reviewer) + if err != nil { + return certmanager.NotAvailable, err + } + if w != nil { + return certmanager.NotAvailable, fmt.Errorf("missing permissions: %s", w) + } + + return certmanager.Available, nil +} + func (a *autoDetect) FIPSEnabled(_ context.Context) bool { return fips.IsFipsEnabled() } diff --git a/internal/autodetect/main_test.go b/internal/autodetect/main_test.go index cae05f1563..82e7a2a093 100644 --- a/internal/autodetect/main_test.go +++ b/internal/autodetect/main_test.go @@ -33,6 +33,8 @@ import ( kubeTesting "k8s.io/client-go/testing" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/autodetectutils" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/openshift" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/prometheus" autoRBAC "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/rbac" @@ -243,8 +245,8 @@ func TestDetectRBACPermissionsBasedOnAvailableClusterRoles(t *testing.T) { } { t.Run(tt.description, func(t *testing.T) { // These settings can be get from env vars - t.Setenv(autoRBAC.NAMESPACE_ENV_VAR, tt.namespace) - t.Setenv(autoRBAC.SA_ENV_VAR, tt.serviceAccount) + t.Setenv(autodetectutils.NAMESPACE_ENV_VAR, tt.namespace) + t.Setenv(autodetectutils.SA_ENV_VAR, tt.serviceAccount) server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {})) defer server.Close() @@ -267,3 +269,94 @@ func TestDetectRBACPermissionsBasedOnAvailableClusterRoles(t *testing.T) { }) } } + +func TestCertManagerAvailability(t *testing.T) { + // test data + for _, tt := range []struct { + description string + 
apiGroupList *metav1.APIGroupList + expectedAvailability certmanager.Availability + namespace string + serviceAccount string + clientGenerator fakeClientGenerator + shouldError bool + }{ + { + description: "CertManager is not installed", + namespace: "default", + serviceAccount: "defaultSA", + apiGroupList: &metav1.APIGroupList{}, + expectedAvailability: certmanager.NotAvailable, + clientGenerator: reactorFactory(v1.SubjectAccessReviewStatus{ + Allowed: true, + }), + shouldError: false, + }, + { + description: "CertManager is installed but RBAC permissions are not granted", + namespace: "default", + serviceAccount: "defaultSA", + apiGroupList: &metav1.APIGroupList{ + Groups: []metav1.APIGroup{ + { + Name: "cert-manager.io", + }, + }, + }, + expectedAvailability: certmanager.NotAvailable, + clientGenerator: reactorFactory(v1.SubjectAccessReviewStatus{ + Allowed: false, + }), + shouldError: true, + }, + { + description: "CertManager is installed and RBAC permissions are granted", + namespace: "default", + serviceAccount: "defaultSA", + apiGroupList: &metav1.APIGroupList{ + Groups: []metav1.APIGroup{ + { + Name: "cert-manager.io", + }, + }, + }, + expectedAvailability: certmanager.Available, + clientGenerator: reactorFactory(v1.SubjectAccessReviewStatus{ + Allowed: true, + }), + shouldError: false, + }, + } { + t.Run(tt.description, func(t *testing.T) { + t.Setenv(autodetectutils.NAMESPACE_ENV_VAR, tt.namespace) + t.Setenv(autodetectutils.SA_ENV_VAR, tt.serviceAccount) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) { + output, err := json.Marshal(tt.apiGroupList) + require.NoError(t, err) + + w.Header().Set("Content-Type", "application/json") + w.WriteHeader(http.StatusOK) + _, err = w.Write(output) + require.NoError(t, err) + })) + defer server.Close() + + r := rbac.NewReviewer(tt.clientGenerator()) + + aD, err := autodetect.New(&rest.Config{Host: server.URL}, r) + require.NoError(t, err) + + // test + cma, err := 
aD.CertManagerAvailability(context.Background()) + + // verify + assert.Equal(t, tt.expectedAvailability, cma) + if tt.shouldError { + require.Error(t, err) + } else { + assert.NoError(t, err) + } + }) + } +} diff --git a/internal/autodetect/rbac/check.go b/internal/autodetect/rbac/check.go index 1e133ebf49..9c67d79cc3 100644 --- a/internal/autodetect/rbac/check.go +++ b/internal/autodetect/rbac/check.go @@ -17,50 +17,23 @@ package rbac import ( "context" "fmt" - "os" rbacv1 "k8s.io/api/rbac/v1" "sigs.k8s.io/controller-runtime/pkg/webhook/admission" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/autodetectutils" "github.com/open-telemetry/opentelemetry-operator/internal/rbac" ) -const ( - SA_ENV_VAR = "SERVICE_ACCOUNT_NAME" - NAMESPACE_ENV_VAR = "NAMESPACE" - NAMESPACE_FILE_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/namespace" -) - -func getOperatorNamespace() (string, error) { - namespace := os.Getenv(NAMESPACE_ENV_VAR) - if namespace != "" { - return namespace, nil - } - - nsBytes, err := os.ReadFile(NAMESPACE_FILE_PATH) - if err != nil { - return "", err - } - return string(nsBytes), nil -} - -func getOperatorServiceAccount() (string, error) { - sa := os.Getenv(SA_ENV_VAR) - if sa == "" { - return sa, fmt.Errorf("%s env variable not found", SA_ENV_VAR) - } - return sa, nil -} - // CheckRBACPermissions checks if the operator has the needed permissions to create RBAC resources automatically. // If the RBAC is there, no errors nor warnings are returned. 
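Both `CheckCertManagerPermissions` and the refactored `CheckRBACPermissions` share one shape: run the subject access reviews, surface denied rules as warnings, and return an error only when the review itself could not be performed. A reduced, hypothetical sketch of that decision flow (the `review` type and `checkPermissions` name are illustrative stand-ins, not the operator's types):

```go
package main

import "fmt"

// review is a hypothetical stand-in for a SubjectAccessReview result:
// one resource, and whether access was allowed.
type review struct {
	resource string
	allowed  bool
}

// checkPermissions mirrors the shape used above: denied reviews become
// warnings for the admission response; a nil error means the check
// itself succeeded, even if permissions are missing.
func checkPermissions(reviews []review) (warnings []string, err error) {
	for _, r := range reviews {
		if !r.allowed {
			warnings = append(warnings, "missing permissions on "+r.resource)
		}
	}
	return warnings, nil
}

func main() {
	w, err := checkPermissions([]review{
		{resource: "certificates", allowed: true},
		{resource: "issuers", allowed: false},
	})
	fmt.Println(w, err)
}
```

This separation is what lets `CertManagerAvailability` treat non-nil warnings as `NotAvailable` while still distinguishing them from a failed API call.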
func CheckRBACPermissions(ctx context.Context, reviewer *rbac.Reviewer) (admission.Warnings, error) { - namespace, err := getOperatorNamespace() + namespace, err := autodetectutils.GetOperatorNamespace() if err != nil { return nil, fmt.Errorf("%s: %w", "not possible to check RBAC rules", err) } - serviceAccount, err := getOperatorServiceAccount() + serviceAccount, err := autodetectutils.GetOperatorServiceAccount() if err != nil { return nil, fmt.Errorf("%s: %w", "not possible to check RBAC rules", err) } diff --git a/internal/components/builder.go b/internal/components/builder.go index 7abc174703..a1c9b34bcc 100644 --- a/internal/components/builder.go +++ b/internal/components/builder.go @@ -38,6 +38,7 @@ type Settings[ComponentConfigType any] struct { livenessGen ProbeGenerator[ComponentConfigType] readinessGen ProbeGenerator[ComponentConfigType] defaultsApplier Defaulter[ComponentConfigType] + envVarGen EnvVarGenerator[ComponentConfigType] } func NewEmptySettings[ComponentConfigType any]() *Settings[ComponentConfigType] { @@ -124,7 +125,11 @@ func (b Builder[ComponentConfigType]) WithReadinessGen(readinessGen ProbeGenerat o.readinessGen = readinessGen }) } - +func (b Builder[ComponentConfigType]) WithEnvVarGen(envVarGen EnvVarGenerator[ComponentConfigType]) Builder[ComponentConfigType] { + return append(b, func(o *Settings[ComponentConfigType]) { + o.envVarGen = envVarGen + }) +} func (b Builder[ComponentConfigType]) WithDefaultsApplier(defaultsApplier Defaulter[ComponentConfigType]) Builder[ComponentConfigType] { return append(b, func(o *Settings[ComponentConfigType]) { o.defaultsApplier = defaultsApplier @@ -141,6 +146,7 @@ func (b Builder[ComponentConfigType]) Build() (*GenericParser[ComponentConfigTyp name: o.name, portParser: o.portParser, rbacGen: o.rbacGen, + envVarGen: o.envVarGen, livenessGen: o.livenessGen, readinessGen: o.readinessGen, defaultsApplier: o.defaultsApplier, diff --git a/internal/components/component.go b/internal/components/component.go 
index 3feb56d6a7..5c8975b9c2 100644 --- a/internal/components/component.go +++ b/internal/components/component.go @@ -49,6 +49,10 @@ type RBACRuleGenerator[ComponentConfigType any] func(logger logr.Logger, config // It's expected that type Config is the configuration used by a parser. type ProbeGenerator[ComponentConfigType any] func(logger logr.Logger, config ComponentConfigType) (*corev1.Probe, error) +// EnvVarGenerator is a function that generates a list of environment variables for a given config. +// It's expected that type Config is the configuration used by a parser. +type EnvVarGenerator[ComponentConfigType any] func(logger logr.Logger, config ComponentConfigType) ([]corev1.EnvVar, error) + // Defaulter is a function that applies given defaults to the passed Config. // It's expected that type Config is the configuration used by a parser. type Defaulter[ComponentConfigType any] func(logger logr.Logger, defaultAddr string, defaultPort int32, config ComponentConfigType) (map[string]interface{}, error) @@ -105,6 +109,9 @@ type Parser interface { // GetLivenessProbe returns a liveness probe set for the collector GetLivenessProbe(logger logr.Logger, config interface{}) (*corev1.Probe, error) + // GetEnvironmentVariables returns a list of environment variables for the collector + GetEnvironmentVariables(logger logr.Logger, config interface{}) ([]corev1.EnvVar, error) + // GetReadinessProbe returns a readiness probe set for the collector GetReadinessProbe(logger logr.Logger, config interface{}) (*corev1.Probe, error) diff --git a/internal/components/extensions/helpers.go b/internal/components/extensions/helpers.go index d05a04f3d9..87708a60e1 100644 --- a/internal/components/extensions/helpers.go +++ b/internal/components/extensions/helpers.go @@ -55,6 +55,9 @@ var ( return components.ParseSingleEndpointSilent(logger, name, defaultPort, &config.SingleEndpointConfig) }). MustBuild(), + components.NewSinglePortParserBuilder("jaeger_query", 16686). 
+ WithTargetPort(16686). + MustBuild(), } ) diff --git a/internal/components/generic_parser.go b/internal/components/generic_parser.go index 02887a5892..a3a40e819d 100644 --- a/internal/components/generic_parser.go +++ b/internal/components/generic_parser.go @@ -34,6 +34,7 @@ type GenericParser[T any] struct { settings *Settings[T] portParser PortParser[T] rbacGen RBACRuleGenerator[T] + envVarGen EnvVarGenerator[T] livenessGen ProbeGenerator[T] readinessGen ProbeGenerator[T] defaultsApplier Defaulter[T] @@ -88,6 +89,17 @@ func (g *GenericParser[T]) GetRBACRules(logger logr.Logger, config interface{}) return g.rbacGen(logger, parsed) } +func (g *GenericParser[T]) GetEnvironmentVariables(logger logr.Logger, config interface{}) ([]corev1.EnvVar, error) { + if g.envVarGen == nil { + return nil, nil + } + var parsed T + if err := mapstructure.Decode(config, &parsed); err != nil { + return nil, err + } + return g.envVarGen(logger, parsed) +} + func (g *GenericParser[T]) Ports(logger logr.Logger, name string, config interface{}) ([]corev1.ServicePort, error) { if g.portParser == nil { return nil, nil diff --git a/internal/components/multi_endpoint.go b/internal/components/multi_endpoint.go index 39449cda2b..9c7019cb6d 100644 --- a/internal/components/multi_endpoint.go +++ b/internal/components/multi_endpoint.go @@ -116,6 +116,10 @@ func (m *MultiPortReceiver) GetRBACRules(logr.Logger, interface{}) ([]rbacv1.Pol return nil, nil } +func (m *MultiPortReceiver) GetEnvironmentVariables(logger logr.Logger, config interface{}) ([]corev1.EnvVar, error) { + return nil, nil +} + type MultiPortBuilder[ComponentConfigType any] []Builder[ComponentConfigType] func NewMultiPortReceiverBuilder(name string) MultiPortBuilder[*MultiProtocolEndpointConfig] { diff --git a/internal/components/receivers/helpers.go b/internal/components/receivers/helpers.go index 89a3cb6fe7..43ebaa0d06 100644 --- a/internal/components/receivers/helpers.go +++ b/internal/components/receivers/helpers.go @@ -136,8 
+136,20 @@ var ( WithProtocol(corev1.ProtocolTCP). WithTargetPort(3100). MustBuild(), + components.NewBuilder[kubeletStatsConfig]().WithName("kubeletstats"). + WithRbacGen(generateKubeletStatsRbacRules). + WithEnvVarGen(generateKubeletStatsEnvVars). + MustBuild(), + components.NewBuilder[k8seventsConfig]().WithName("k8s_events"). + WithRbacGen(generatek8seventsRbacRules). + MustBuild(), + components.NewBuilder[k8sclusterConfig]().WithName("k8s_cluster"). + WithRbacGen(generatek8sclusterRbacRules). + MustBuild(), + components.NewBuilder[k8sobjectsConfig]().WithName("k8sobjects"). + WithRbacGen(generatek8sobjectsRbacRules). + MustBuild(), NewScraperParser("prometheus"), - NewScraperParser("kubeletstats"), NewScraperParser("sshcheck"), NewScraperParser("cloudfoundry"), NewScraperParser("vcenter"), diff --git a/internal/components/receivers/k8scluster.go b/internal/components/receivers/k8scluster.go new file mode 100644 index 0000000000..aa813d9642 --- /dev/null +++ b/internal/components/receivers/k8scluster.go @@ -0,0 +1,87 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
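The `WithEnvVarGen` addition to the builder above follows the functional-options pattern already used for `WithRbacGen` and the probe generators: each `With*` call appends a mutator, and `Build` applies them in order. A reduced, hypothetical sketch of that accumulation (lower-case names and string env vars are simplifications, not the operator's generics):

```go
package main

import "fmt"

// settings is a cut-down stand-in for the generic Settings type above.
type settings struct {
	name      string
	envVarGen func() []string
}

// builder is a slice of mutators, mirroring Builder[T] above.
type builder []func(*settings)

func (b builder) withName(n string) builder {
	return append(b, func(s *settings) { s.name = n })
}

func (b builder) withEnvVarGen(g func() []string) builder {
	return append(b, func(s *settings) { s.envVarGen = g })
}

// build applies each recorded option in order, as Build() does above.
func (b builder) build() settings {
	s := settings{}
	for _, opt := range b {
		opt(&s)
	}
	return s
}

func main() {
	s := builder{}.
		withName("kubeletstats").
		withEnvVarGen(func() []string { return []string{"K8S_NODE_NAME"} }).
		build()
	fmt.Println(s.name, s.envVarGen())
}
```

Because options are just appended closures, a nil `envVarGen` simply means the option was never supplied, which is why `GetEnvironmentVariables` can return `nil, nil` when the generator is unset.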
+ +package receivers + +import ( + "github.com/go-logr/logr" + rbacv1 "k8s.io/api/rbac/v1" +) + +type k8sclusterConfig struct { + Distribution string `mapstructure:"distribution"` +} + +func generatek8sclusterRbacRules(_ logr.Logger, cfg k8sclusterConfig) ([]rbacv1.PolicyRule, error) { + policyRules := []rbacv1.PolicyRule{ + { + APIGroups: []string{""}, + Resources: []string{ + "events", + "namespaces", + "namespaces/status", + "nodes", + "nodes/spec", + "pods", + "pods/status", + "replicationcontrollers", + "replicationcontrollers/status", + "resourcequotas", + "services", + }, + Verbs: []string{"get", "list", "watch"}, + }, + { + APIGroups: []string{"apps"}, + Resources: []string{ + "daemonsets", + "deployments", + "replicasets", + "statefulsets", + }, + Verbs: []string{"get", "list", "watch"}, + }, + { + APIGroups: []string{"extensions"}, + Resources: []string{ + "daemonsets", + "deployments", + "replicasets", + }, + Verbs: []string{"get", "list", "watch"}, + }, + { + APIGroups: []string{"batch"}, + Resources: []string{ + "jobs", + "cronjobs", + }, + Verbs: []string{"get", "list", "watch"}, + }, + { + APIGroups: []string{"autoscaling"}, + Resources: []string{"horizontalpodautoscalers"}, + Verbs: []string{"get", "list", "watch"}, + }, + } + + if cfg.Distribution == "openshift" { + policyRules = append(policyRules, rbacv1.PolicyRule{ + APIGroups: []string{"quota.openshift.io"}, + Resources: []string{"clusterresourcequotas"}, + Verbs: []string{"get", "list", "watch"}, + }) + } + return policyRules, nil +} diff --git a/internal/components/receivers/k8scluster_test.go b/internal/components/receivers/k8scluster_test.go new file mode 100644 index 0000000000..36890ab60e --- /dev/null +++ b/internal/components/receivers/k8scluster_test.go @@ -0,0 +1,164 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package receivers + +import ( + "testing" + + "github.com/go-logr/logr" + "github.com/stretchr/testify/assert" + rbacv1 "k8s.io/api/rbac/v1" +) + +func Test_generatek8sclusterRbacRules(t *testing.T) { + tests := []struct { + name string + cfg k8sclusterConfig + want []rbacv1.PolicyRule + wantErr bool + }{ + { + name: "default configuration", + cfg: k8sclusterConfig{}, + want: []rbacv1.PolicyRule{ + { + APIGroups: []string{""}, + Resources: []string{ + "events", + "namespaces", + "namespaces/status", + "nodes", + "nodes/spec", + "pods", + "pods/status", + "replicationcontrollers", + "replicationcontrollers/status", + "resourcequotas", + "services", + }, + Verbs: []string{"get", "list", "watch"}, + }, + { + APIGroups: []string{"apps"}, + Resources: []string{ + "daemonsets", + "deployments", + "replicasets", + "statefulsets", + }, + Verbs: []string{"get", "list", "watch"}, + }, + { + APIGroups: []string{"extensions"}, + Resources: []string{ + "daemonsets", + "deployments", + "replicasets", + }, + Verbs: []string{"get", "list", "watch"}, + }, + { + APIGroups: []string{"batch"}, + Resources: []string{ + "jobs", + "cronjobs", + }, + Verbs: []string{"get", "list", "watch"}, + }, + { + APIGroups: []string{"autoscaling"}, + Resources: []string{"horizontalpodautoscalers"}, + Verbs: []string{"get", "list", "watch"}, + }, + }, + wantErr: false, + }, + { + name: "openshift configuration", + cfg: k8sclusterConfig{ + Distribution: "openshift", + }, + want: []rbacv1.PolicyRule{ + { + APIGroups: []string{""}, + Resources: []string{ + "events", + "namespaces", + 
"namespaces/status", + "nodes", + "nodes/spec", + "pods", + "pods/status", + "replicationcontrollers", + "replicationcontrollers/status", + "resourcequotas", + "services", + }, + Verbs: []string{"get", "list", "watch"}, + }, + { + APIGroups: []string{"apps"}, + Resources: []string{ + "daemonsets", + "deployments", + "replicasets", + "statefulsets", + }, + Verbs: []string{"get", "list", "watch"}, + }, + { + APIGroups: []string{"extensions"}, + Resources: []string{ + "daemonsets", + "deployments", + "replicasets", + }, + Verbs: []string{"get", "list", "watch"}, + }, + { + APIGroups: []string{"batch"}, + Resources: []string{ + "jobs", + "cronjobs", + }, + Verbs: []string{"get", "list", "watch"}, + }, + { + APIGroups: []string{"autoscaling"}, + Resources: []string{"horizontalpodautoscalers"}, + Verbs: []string{"get", "list", "watch"}, + }, + { + APIGroups: []string{"quota.openshift.io"}, + Resources: []string{"clusterresourcequotas"}, + Verbs: []string{"get", "list", "watch"}, + }, + }, + wantErr: false, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + got, err := generatek8sclusterRbacRules(logr.Discard(), tt.cfg) + if tt.wantErr { + assert.Error(t, err) + return + } + assert.NoError(t, err) + assert.Equal(t, tt.want, got) + }) + } +} diff --git a/internal/components/receivers/k8sevents.go b/internal/components/receivers/k8sevents.go new file mode 100644 index 0000000000..e9d6d45a88 --- /dev/null +++ b/internal/components/receivers/k8sevents.go @@ -0,0 +1,79 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package receivers + +import ( + "github.com/go-logr/logr" + rbacv1 "k8s.io/api/rbac/v1" +) + +type k8seventsConfig struct{} + +func generatek8seventsRbacRules(_ logr.Logger, _ k8seventsConfig) ([]rbacv1.PolicyRule, error) { + // The k8s Events Receiver always needs get, list, and watch permissions on the following resources. + return []rbacv1.PolicyRule{ + { + APIGroups: []string{""}, + Resources: []string{ + "events", + "namespaces", + "namespaces/status", + "nodes", + "nodes/spec", + "pods", + "pods/status", + "replicationcontrollers", + "replicationcontrollers/status", + "resourcequotas", + "services", + }, + Verbs: []string{"get", "list", "watch"}, + }, + { + APIGroups: []string{"apps"}, + Resources: []string{ + "daemonsets", + "deployments", + "replicasets", + "statefulsets", + }, + Verbs: []string{"get", "list", "watch"}, + }, + { + APIGroups: []string{"extensions"}, + Resources: []string{ + "daemonsets", + "deployments", + "replicasets", + }, + Verbs: []string{"get", "list", "watch"}, + }, + { + APIGroups: []string{"batch"}, + Resources: []string{ + "jobs", + "cronjobs", + }, + Verbs: []string{"get", "list", "watch"}, + }, + { + APIGroups: []string{"autoscaling"}, + Resources: []string{ + "horizontalpodautoscalers", + }, + Verbs: []string{"get", "list", "watch"}, + }, + }, nil +} diff --git a/internal/components/receivers/k8sobjects.go b/internal/components/receivers/k8sobjects.go new file mode 100644 index 0000000000..10505ad35c --- /dev/null +++ b/internal/components/receivers/k8sobjects.go @@ -0,0 +1,49 @@ +// Copyright The OpenTelemetry Authors +//
+// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package receivers + +import ( + "github.com/go-logr/logr" + rbacv1 "k8s.io/api/rbac/v1" +) + +type k8sobjectsConfig struct { + Objects []k8sObject `yaml:"objects"` +} + +type k8sObject struct { + Name string `yaml:"name"` + Mode string `yaml:"mode"` + Group string `yaml:"group,omitempty"` +} + +func generatek8sobjectsRbacRules(_ logr.Logger, config k8sobjectsConfig) ([]rbacv1.PolicyRule, error) { + // https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/k8sobjectsreceiver#rbac + prs := []rbacv1.PolicyRule{} + for _, obj := range config.Objects { + permissions := []string{"list"} + if obj.Mode == "pull" && (obj.Name != "events" && obj.Name != "events.k8s.io") { + permissions = append(permissions, "get") + } else if obj.Mode == "watch" { + permissions = append(permissions, "watch") + } + prs = append(prs, rbacv1.PolicyRule{ + APIGroups: []string{obj.Group}, + Resources: []string{obj.Name}, + Verbs: permissions, + }) + } + return prs, nil +} diff --git a/internal/components/receivers/k8sobjects_test.go b/internal/components/receivers/k8sobjects_test.go new file mode 100644 index 0000000000..647882f572 --- /dev/null +++ b/internal/components/receivers/k8sobjects_test.go @@ -0,0 +1,136 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package receivers + +import ( + "testing" + + "github.com/go-logr/logr" + "github.com/stretchr/testify/assert" + rbacv1 "k8s.io/api/rbac/v1" +) + +func Test_generatek8sobjectsRbacRules(t *testing.T) { + tests := []struct { + name string + config k8sobjectsConfig + want []rbacv1.PolicyRule + }{ + { + name: "basic watch mode", + config: k8sobjectsConfig{ + Objects: []k8sObject{ + { + Name: "pods", + Mode: "watch", + Group: "v1", + }, + }, + }, + want: []rbacv1.PolicyRule{ + { + APIGroups: []string{"v1"}, + Resources: []string{"pods"}, + Verbs: []string{"list", "watch"}, + }, + }, + }, + { + name: "pull mode with events", + config: k8sobjectsConfig{ + Objects: []k8sObject{ + { + Name: "events", + Mode: "pull", + Group: "v1", + }, + }, + }, + want: []rbacv1.PolicyRule{ + { + APIGroups: []string{"v1"}, + Resources: []string{"events"}, + Verbs: []string{"list"}, + }, + }, + }, + { + name: "pull mode with non-events", + config: k8sobjectsConfig{ + Objects: []k8sObject{ + { + Name: "pods", + Mode: "pull", + Group: "v1", + }, + }, + }, + want: []rbacv1.PolicyRule{ + { + APIGroups: []string{"v1"}, + Resources: []string{"pods"}, + Verbs: []string{"list", "get"}, + }, + }, + }, + { + name: "multiple objects", + config: k8sobjectsConfig{ + Objects: []k8sObject{ + { + Name: "pods", + Mode: "pull", + Group: "v1", + }, + { + Name: "events", + Mode: "pull", + Group: "v1", + }, + { + Name: "deployments", + Mode: "watch", + Group: "apps/v1", + }, + }, + }, + want: []rbacv1.PolicyRule{ + { + APIGroups: []string{"v1"}, + Resources: []string{"pods"}, + Verbs: 
[]string{"list", "get"}, + }, + { + APIGroups: []string{"v1"}, + Resources: []string{"events"}, + Verbs: []string{"list"}, + }, + { + APIGroups: []string{"apps/v1"}, + Resources: []string{"deployments"}, + Verbs: []string{"list", "watch"}, + }, + }, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + got, err := generatek8sobjectsRbacRules(logr.Discard(), tt.config) + assert.NoError(t, err) + assert.Equal(t, tt.want, got) + }) + } +} diff --git a/internal/components/receivers/kubeletstats.go b/internal/components/receivers/kubeletstats.go new file mode 100644 index 0000000000..43f2be8697 --- /dev/null +++ b/internal/components/receivers/kubeletstats.go @@ -0,0 +1,95 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License.
+ +package receivers + +import ( + "github.com/go-logr/logr" + corev1 "k8s.io/api/core/v1" + rbacv1 "k8s.io/api/rbac/v1" +) + +type metricConfig struct { + Enabled bool `mapstructure:"enabled"` +} + +type metrics struct { + K8sContainerCPULimitUtilization metricConfig `mapstructure:"k8s.container.cpu_limit_utilization"` + K8sContainerCPURequestUtilization metricConfig `mapstructure:"k8s.container.cpu_request_utilization"` + K8sContainerMemoryLimitUtilization metricConfig `mapstructure:"k8s.container.memory_limit_utilization"` + K8sContainerMemoryRequestUtilization metricConfig `mapstructure:"k8s.container.memory_request_utilization"` + K8sPodCPULimitUtilization metricConfig `mapstructure:"k8s.pod.cpu_limit_utilization"` + K8sPodCPURequestUtilization metricConfig `mapstructure:"k8s.pod.cpu_request_utilization"` + K8sPodMemoryLimitUtilization metricConfig `mapstructure:"k8s.pod.memory_limit_utilization"` + K8sPodMemoryRequestUtilization metricConfig `mapstructure:"k8s.pod.memory_request_utilization"` +} + +// kubeletStatsConfig is a minimal struct needed for parsing a valid kubeletstats receiver configuration. +// It only contains the fields necessary for parsing; other fields can be added in the future. +type kubeletStatsConfig struct { + ExtraMetadataLabels []string `mapstructure:"extra_metadata_labels"` + Metrics metrics `mapstructure:"metrics"` + AuthType string `mapstructure:"auth_type"` +} + +func generateKubeletStatsEnvVars(_ logr.Logger, config kubeletStatsConfig) ([]corev1.EnvVar, error) { + // The documentation states that the K8S_NODE_NAME environment variable is required when using the serviceAccount auth type. + // It also recommends the variable for the read-only endpoint, so it is always added to make setup easier for users.
+ // https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/kubeletstatsreceiver/README.md + return []corev1.EnvVar{ + {Name: "K8S_NODE_NAME", ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: "spec.nodeName"}}}, + }, nil +} + +func generateKubeletStatsRbacRules(_ logr.Logger, config kubeletStatsConfig) ([]rbacv1.PolicyRule, error) { + // The Kubelet Stats Receiver always needs get permissions on the nodes/stats resource. + prs := []rbacv1.PolicyRule{ + { + APIGroups: []string{""}, + Resources: []string{"nodes/stats"}, + Verbs: []string{"get"}, + }, + } + + // Additionally, when using extra_metadata_labels or any of the {request|limit}_utilization metrics, + // the receiver also needs get permissions for nodes/proxy resources. + nodesProxyPr := rbacv1.PolicyRule{ + APIGroups: []string{""}, + Resources: []string{"nodes/proxy"}, + Verbs: []string{"get"}, + } + + if len(config.ExtraMetadataLabels) > 0 { + prs = append(prs, nodesProxyPr) + return prs, nil + } + + metrics := []bool{ + config.Metrics.K8sContainerCPULimitUtilization.Enabled, + config.Metrics.K8sContainerCPURequestUtilization.Enabled, + config.Metrics.K8sContainerMemoryLimitUtilization.Enabled, + config.Metrics.K8sContainerMemoryRequestUtilization.Enabled, + config.Metrics.K8sPodCPULimitUtilization.Enabled, + config.Metrics.K8sPodCPURequestUtilization.Enabled, + config.Metrics.K8sPodMemoryLimitUtilization.Enabled, + config.Metrics.K8sPodMemoryRequestUtilization.Enabled, + } + for _, metric := range metrics { + if metric { + prs = append(prs, nodesProxyPr) + return prs, nil + } + } + return prs, nil +} diff --git a/internal/components/receivers/kubeletstats_test.go b/internal/components/receivers/kubeletstats_test.go new file mode 100644 index 0000000000..246aec5dee --- /dev/null +++ b/internal/components/receivers/kubeletstats_test.go @@ -0,0 +1,99 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0
(the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package receivers + +import ( + "testing" + + "github.com/go-logr/logr" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + rbacv1 "k8s.io/api/rbac/v1" +) + +func TestGenerateKubeletStatsRbacRules(t *testing.T) { + baseRule := rbacv1.PolicyRule{ + APIGroups: []string{""}, + Resources: []string{"nodes/stats"}, + Verbs: []string{"get"}, + } + + proxyRule := rbacv1.PolicyRule{ + APIGroups: []string{""}, + Resources: []string{"nodes/proxy"}, + Verbs: []string{"get"}, + } + + tests := []struct { + name string + config kubeletStatsConfig + expectedRules []rbacv1.PolicyRule + expectedErrMsg string + }{ + { + name: "Default config", + config: kubeletStatsConfig{}, + expectedRules: []rbacv1.PolicyRule{baseRule}, + }, + { + name: "Extra metadata labels", + config: kubeletStatsConfig{ + ExtraMetadataLabels: []string{"label1", "label2"}, + }, + expectedRules: []rbacv1.PolicyRule{baseRule, proxyRule}, + }, + { + name: "CPU limit utilization enabled", + config: kubeletStatsConfig{ + Metrics: metrics{ + K8sContainerCPULimitUtilization: metricConfig{Enabled: true}, + }, + }, + expectedRules: []rbacv1.PolicyRule{baseRule, proxyRule}, + }, + { + name: "Memory request utilization enabled", + config: kubeletStatsConfig{ + Metrics: metrics{ + K8sPodMemoryRequestUtilization: metricConfig{Enabled: true}, + }, + }, + expectedRules: []rbacv1.PolicyRule{baseRule, proxyRule}, + }, + { + name: "No extra permissions needed", + config: kubeletStatsConfig{ + 
Metrics: metrics{ + K8sContainerCPULimitUtilization: metricConfig{Enabled: false}, + }, + }, + expectedRules: []rbacv1.PolicyRule{baseRule}, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + rules, err := generateKubeletStatsRbacRules(logr.Discard(), tt.config) + + if tt.expectedErrMsg != "" { + require.Error(t, err) + assert.Contains(t, err.Error(), tt.expectedErrMsg) + } else { + require.NoError(t, err) + assert.Equal(t, tt.expectedRules, rules) + } + }) + } +} diff --git a/internal/components/receivers/single_endpoint_receiver_test.go b/internal/components/receivers/single_endpoint_receiver_test.go index faaae6dbd7..abb866a8d8 100644 --- a/internal/components/receivers/single_endpoint_receiver_test.go +++ b/internal/components/receivers/single_endpoint_receiver_test.go @@ -83,7 +83,6 @@ func TestDownstreamParsers(t *testing.T) { {"awsxray", "awsxray", "__awsxray", 2000, false}, {"tcplog", "tcplog", "__tcplog", 0, true}, {"udplog", "udplog", "__udplog", 0, true}, - {"k8s_cluster", "k8s_cluster", "__k8s_cluster", 0, false}, } { t.Run(tt.receiverName, func(t *testing.T) { t.Run("builds successfully", func(t *testing.T) { diff --git a/internal/config/main.go b/internal/config/main.go index 48a09faa67..434ae5493f 100644 --- a/internal/config/main.go +++ b/internal/config/main.go @@ -23,6 +23,7 @@ import ( logf "sigs.k8s.io/controller-runtime/pkg/log" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/openshift" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/prometheus" autoRBAC "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/rbac" @@ -65,6 +66,7 @@ type Config struct { openshiftRoutesAvailability openshift.RoutesAvailability prometheusCRAvailability prometheus.Availability + certManagerAvailability certmanager.Availability
labelsFilter []string annotationsFilter []string } @@ -76,6 +78,7 @@ func New(opts ...Option) Config { prometheusCRAvailability: prometheus.NotAvailable, openshiftRoutesAvailability: openshift.RoutesNotAvailable, createRBACPermissions: autoRBAC.NotAvailable, + certManagerAvailability: certmanager.NotAvailable, collectorConfigMapEntry: defaultCollectorConfigMapEntry, targetAllocatorConfigMapEntry: defaultTargetAllocatorConfigMapEntry, operatorOpAMPBridgeConfigMapEntry: defaultOperatorOpAMPBridgeConfigMapEntry, @@ -108,6 +111,7 @@ func New(opts ...Option) Config { logger: o.logger, openshiftRoutesAvailability: o.openshiftRoutesAvailability, prometheusCRAvailability: o.prometheusCRAvailability, + certManagerAvailability: o.certManagerAvailability, autoInstrumentationJavaImage: o.autoInstrumentationJavaImage, autoInstrumentationNodeJSImage: o.autoInstrumentationNodeJSImage, autoInstrumentationPythonImage: o.autoInstrumentationPythonImage, @@ -146,6 +150,13 @@ func (c *Config) AutoDetect() error { c.createRBACPermissions = rAuto c.logger.V(2).Info("create rbac permissions detected", "availability", rAuto) + cmAvl, err := c.autoDetect.CertManagerAvailability(context.Background()) + if err != nil { + c.logger.V(2).Info("the cert manager crd and permissions are not set for the operator", "reason", err) + } + c.certManagerAvailability = cmAvl + c.logger.V(2).Info("the cert manager crd and permissions are set for the operator", "availability", cmAvl) + return nil } @@ -234,6 +245,11 @@ func (c *Config) PrometheusCRAvailability() prometheus.Availability { return c.prometheusCRAvailability } +// CertManagerAvailability represents the availability of the Cert-Manager. +func (c *Config) CertManagerAvailability() certmanager.Availability { + return c.certManagerAvailability +} + // AutoInstrumentationJavaImage returns OpenTelemetry Java auto-instrumentation container image. 
func (c *Config) AutoInstrumentationJavaImage() string { return c.autoInstrumentationJavaImage diff --git a/internal/config/main_test.go b/internal/config/main_test.go index 08882a0392..4d075e62bb 100644 --- a/internal/config/main_test.go +++ b/internal/config/main_test.go @@ -22,6 +22,7 @@ import ( "github.com/stretchr/testify/require" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/openshift" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/prometheus" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/rbac" @@ -56,6 +57,9 @@ func TestConfigChangesOnAutoDetect(t *testing.T) { RBACPermissionsFunc: func(ctx context.Context) (rbac.Availability, error) { return rbac.Available, nil }, + CertManagerAvailabilityFunc: func(ctx context.Context) (certmanager.Availability, error) { + return certmanager.Available, nil + }, } cfg := config.New( config.WithAutoDetect(mock), @@ -80,6 +84,7 @@ type mockAutoDetect struct { OpenShiftRoutesAvailabilityFunc func() (openshift.RoutesAvailability, error) PrometheusCRsAvailabilityFunc func() (prometheus.Availability, error) RBACPermissionsFunc func(ctx context.Context) (rbac.Availability, error) + CertManagerAvailabilityFunc func(ctx context.Context) (certmanager.Availability, error) } func (m *mockAutoDetect) FIPSEnabled(_ context.Context) bool { @@ -106,3 +111,10 @@ func (m *mockAutoDetect) RBACPermissions(ctx context.Context) (rbac.Availability } return rbac.NotAvailable, nil } + +func (m *mockAutoDetect) CertManagerAvailability(ctx context.Context) (certmanager.Availability, error) { + if m.CertManagerAvailabilityFunc != nil { + return m.CertManagerAvailabilityFunc(ctx) + } + return certmanager.NotAvailable, nil +} diff --git a/internal/config/options.go b/internal/config/options.go index 5cb687337e..6046dcc356 100644 
--- a/internal/config/options.go +++ b/internal/config/options.go @@ -19,6 +19,7 @@ import ( "go.uber.org/zap/zapcore" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/openshift" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/prometheus" autoRBAC "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/rbac" @@ -56,6 +57,7 @@ type options struct { operatorOpAMPBridgeImage string openshiftRoutesAvailability openshift.RoutesAvailability prometheusCRAvailability prometheus.Availability + certManagerAvailability certmanager.Availability labelsFilter []string annotationsFilter []string } @@ -206,6 +208,12 @@ func WithRBACPermissions(rAuto autoRBAC.Availability) Option { } } +func WithCertManagerAvailability(cmAvl certmanager.Availability) Option { + return func(o *options) { + o.certManagerAvailability = cmAvl + } +} + func WithLabelFilters(labelFilters []string) Option { return func(o *options) { o.labelsFilter = append(o.labelsFilter, labelFilters...) diff --git a/internal/manifests/collector/collector.go b/internal/manifests/collector/collector.go index f8e78e5f9f..0e4cc414d5 100644 --- a/internal/manifests/collector/collector.go +++ b/internal/manifests/collector/collector.go @@ -15,6 +15,9 @@ package collector import ( + "errors" + "fmt" + "sigs.k8s.io/controller-runtime/pkg/client" "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" @@ -50,9 +53,14 @@ func Build(params manifests.Params) ([]client.Object, error) { manifests.Factory(Service), manifests.Factory(HeadlessService), manifests.Factory(MonitoringService), + manifests.Factory(ExtensionService), manifests.Factory(Ingress), }...) 
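The `Build` hunk above assembles a slice of manifest factories and later invokes each one in order, collecting non-nil results. That factory-slice pattern can be sketched in isolation; the `object` and `factory` types below are illustrative stand-ins for `client.Object` and `manifests.Factory`, not the operator's real types:

```go
package main

import "fmt"

// object stands in for a rendered Kubernetes manifest.
type object struct{ kind string }

// factory produces one manifest, or nil if it is not needed.
type factory func() (*object, error)

// build invokes each factory in order, failing fast on error and
// skipping factories that decide their manifest is not required.
func build(factories []factory) ([]*object, error) {
	var out []*object
	for _, f := range factories {
		obj, err := f()
		if err != nil {
			return nil, err
		}
		if obj != nil {
			out = append(out, obj)
		}
	}
	return out, nil
}

func main() {
	factories := []factory{
		func() (*object, error) { return &object{kind: "Deployment"}, nil },
		func() (*object, error) { return nil, nil }, // e.g. Ingress disabled for this mode
		func() (*object, error) { return &object{kind: "Service"}, nil },
	}
	objs, _ := build(factories)
	fmt.Println(len(objs)) // 2
}
```

This is why conditional manifests (feature gates, sidecar mode, Prometheus availability) are handled simply by appending or omitting factories before the loop runs.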
+ if featuregate.CollectorUsesTargetAllocatorCR.IsEnabled() { + manifestFactories = append(manifestFactories, manifests.Factory(TargetAllocator)) + } + if params.OtelCol.Spec.Observability.Metrics.EnableMetrics && featuregate.PrometheusOperatorIsAvailable.IsEnabled() { if params.OtelCol.Spec.Mode == v1beta1.ModeSidecar { manifestFactories = append(manifestFactories, manifests.Factory(PodMonitor)) @@ -76,6 +84,20 @@ func Build(params manifests.Params) ([]client.Object, error) { resourceManifests = append(resourceManifests, res) } } + + if needsCheckSaPermissions(params) { + warnings, err := CheckRbacRules(params, params.OtelCol.Spec.ServiceAccount) + if err != nil { + return nil, fmt.Errorf("error checking RBAC rules for serviceAccount %s: %w", params.OtelCol.Spec.ServiceAccount, err) + } + + var w []error + for _, warning := range warnings { + w = append(w, fmt.Errorf("RBAC rules are missing: %s", warning)) + } + return nil, errors.Join(w...) + } + routes, err := Routes(params) if err != nil { return nil, err @@ -86,3 +108,10 @@ func Build(params manifests.Params) ([]client.Object, error) { } return resourceManifests, nil } + +func needsCheckSaPermissions(params manifests.Params) bool { + return params.ErrorAsWarning && + params.Config.CreateRBACPermissions() == rbac.NotAvailable && + params.Reviewer != nil && + params.OtelCol.Spec.ServiceAccount != "" +} diff --git a/internal/manifests/collector/collector_test.go b/internal/manifests/collector/collector_test.go new file mode 100644 index 0000000000..473b2c6ab9 --- /dev/null +++ b/internal/manifests/collector/collector_test.go @@ -0,0 +1,343 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package collector + +import ( + "context" + "fmt" + "testing" + + "github.com/go-logr/logr" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + otelColFeatureGate "go.opentelemetry.io/collector/featuregate" + v1 "k8s.io/api/authorization/v1" + rbacv1 "k8s.io/api/rbac/v1" + + "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/prometheus" + autoRbac "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/rbac" + "github.com/open-telemetry/opentelemetry-operator/internal/config" + "github.com/open-telemetry/opentelemetry-operator/internal/manifests" + irbac "github.com/open-telemetry/opentelemetry-operator/internal/rbac" + "github.com/open-telemetry/opentelemetry-operator/pkg/featuregate" +) + +func TestNeedsCheckSaPermissions(t *testing.T) { + tests := []struct { + name string + params manifests.Params + expected bool + }{ + { + name: "should return true when all conditions are met", + params: manifests.Params{ + ErrorAsWarning: true, + Config: config.New(config.WithRBACPermissions(autoRbac.NotAvailable)), + Reviewer: &mockReviewer{}, + OtelCol: v1beta1.OpenTelemetryCollector{ + Spec: v1beta1.OpenTelemetryCollectorSpec{ + OpenTelemetryCommonFields: v1beta1.OpenTelemetryCommonFields{ + ServiceAccount: "test-sa", + }, + }, + }, + }, + expected: true, + }, + { + name: "should return false when ErrorAsWarning is false", + params: manifests.Params{ + ErrorAsWarning: false, + Config: 
config.New(config.WithRBACPermissions(autoRbac.NotAvailable)), + Reviewer: &mockReviewer{}, + OtelCol: v1beta1.OpenTelemetryCollector{ + Spec: v1beta1.OpenTelemetryCollectorSpec{ + OpenTelemetryCommonFields: v1beta1.OpenTelemetryCommonFields{ + ServiceAccount: "test-sa", + }, + }, + }, + }, + expected: false, + }, + { + name: "should return false when RBAC is available", + params: manifests.Params{ + ErrorAsWarning: true, + Config: config.New(config.WithRBACPermissions(autoRbac.Available)), + Reviewer: &mockReviewer{}, + OtelCol: v1beta1.OpenTelemetryCollector{ + Spec: v1beta1.OpenTelemetryCollectorSpec{ + OpenTelemetryCommonFields: v1beta1.OpenTelemetryCommonFields{ + ServiceAccount: "test-sa", + }, + }, + }, + }, + expected: false, + }, + { + name: "should return false when Reviewer is nil", + params: manifests.Params{ + ErrorAsWarning: true, + Config: config.New(config.WithRBACPermissions(autoRbac.NotAvailable)), + Reviewer: nil, + OtelCol: v1beta1.OpenTelemetryCollector{ + Spec: v1beta1.OpenTelemetryCollectorSpec{ + OpenTelemetryCommonFields: v1beta1.OpenTelemetryCommonFields{ + ServiceAccount: "test-sa", + }, + }, + }, + }, + expected: false, + }, + { + name: "should return false when ServiceAccount is empty", + params: manifests.Params{ + ErrorAsWarning: true, + Config: config.New(config.WithRBACPermissions(autoRbac.NotAvailable)), + Reviewer: &mockReviewer{}, + OtelCol: v1beta1.OpenTelemetryCollector{ + Spec: v1beta1.OpenTelemetryCollectorSpec{ + OpenTelemetryCommonFields: v1beta1.OpenTelemetryCommonFields{ + ServiceAccount: "", + }, + }, + }, + }, + expected: false, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := needsCheckSaPermissions(tt.params) + assert.Equal(t, tt.expected, result) + }) + } +} + +type mockReviewer struct{} + +var _ irbac.SAReviewer = &mockReviewer{} + +func (m *mockReviewer) CheckPolicyRules(ctx context.Context, serviceAccount, serviceAccountNamespace string, rules ...*rbacv1.PolicyRule) 
([]*v1.SubjectAccessReview, error) { + return nil, fmt.Errorf("error checking policy rules") +} + +func (m *mockReviewer) CanAccess(ctx context.Context, serviceAccount, serviceAccountNamespace string, res *v1.ResourceAttributes, nonResourceAttributes *v1.NonResourceAttributes) (*v1.SubjectAccessReview, error) { + return nil, nil +} + +func TestBuild(t *testing.T) { + logger := logr.Discard() + tests := []struct { + name string + params manifests.Params + expectedObjects int + wantErr bool + featureGate *otelColFeatureGate.Gate + }{ + { + name: "deployment mode builds expected manifests", + params: manifests.Params{ + Log: logger, + OtelCol: v1beta1.OpenTelemetryCollector{ + Spec: v1beta1.OpenTelemetryCollectorSpec{ + Mode: v1beta1.ModeDeployment, + }, + }, + Config: config.New(), + }, + expectedObjects: 5, + wantErr: false, + }, + { + name: "statefulset mode builds expected manifests", + params: manifests.Params{ + Log: logger, + OtelCol: v1beta1.OpenTelemetryCollector{ + Spec: v1beta1.OpenTelemetryCollectorSpec{ + Mode: v1beta1.ModeStatefulSet, + }, + }, + Config: config.New(), + }, + expectedObjects: 5, + wantErr: false, + }, + { + name: "sidecar mode skips deployment manifests", + params: manifests.Params{ + Log: logger, + OtelCol: v1beta1.OpenTelemetryCollector{ + Spec: v1beta1.OpenTelemetryCollectorSpec{ + Mode: v1beta1.ModeSidecar, + }, + }, + Config: config.New(), + }, + expectedObjects: 3, + wantErr: false, + }, + { + name: "rbac available adds cluster role manifests", + params: manifests.Params{ + Log: logger, + OtelCol: v1beta1.OpenTelemetryCollector{ + Spec: v1beta1.OpenTelemetryCollectorSpec{ + Mode: v1beta1.ModeDeployment, + Config: v1beta1.Config{ + Processors: &v1beta1.AnyConfig{ + Object: map[string]any{ + "k8sattributes": map[string]any{}, + }, + }, + Service: v1beta1.Service{ + Pipelines: map[string]*v1beta1.Pipeline{ + "traces": { + Processors: []string{"k8sattributes"}, + }, + }, + }, + }, + }, + }, + Config: 
config.New(config.WithRBACPermissions(autoRbac.Available)),
+			},
+			expectedObjects: 7,
+			wantErr:         false,
+		},
+		{
+			name: "metrics enabled adds monitoring service monitor",
+			params: manifests.Params{
+				Log: logger,
+				OtelCol: v1beta1.OpenTelemetryCollector{
+					Spec: v1beta1.OpenTelemetryCollectorSpec{
+						Mode: v1beta1.ModeDeployment,
+						Observability: v1beta1.ObservabilitySpec{
+							Metrics: v1beta1.MetricsConfigSpec{
+								EnableMetrics: true,
+							},
+						},
+					},
+				},
+				Config: config.New(config.WithPrometheusCRAvailability(prometheus.Available)),
+			},
+			expectedObjects: 6,
+			wantErr:         false,
+			featureGate:     featuregate.PrometheusOperatorIsAvailable,
+		},
+		{
+			name: "metrics enabled adds service monitors",
+			params: manifests.Params{
+				Log: logger,
+				OtelCol: v1beta1.OpenTelemetryCollector{
+					Spec: v1beta1.OpenTelemetryCollectorSpec{
+						Mode: v1beta1.ModeDeployment,
+						Observability: v1beta1.ObservabilitySpec{
+							Metrics: v1beta1.MetricsConfigSpec{
+								EnableMetrics: true,
+							},
+						},
+						Config: v1beta1.Config{
+							Exporters: v1beta1.AnyConfig{
+								Object: map[string]any{
+									"prometheus": map[string]any{
+										"endpoint": "1.2.3.4:1234",
+									},
+								},
+							},
+							Service: v1beta1.Service{
+								Pipelines: map[string]*v1beta1.Pipeline{
+									"metrics": {
+										Exporters: []string{"prometheus"},
+									},
+								},
+							},
+						},
+					},
+				},
+				Config: config.New(config.WithPrometheusCRAvailability(prometheus.Available)),
+			},
+			expectedObjects: 9,
+			wantErr:         false,
+			featureGate:     featuregate.PrometheusOperatorIsAvailable,
+		},
+		{
+			name: "check sa permissions",
+			params: manifests.Params{
+				ErrorAsWarning: true,
+				Reviewer:       &mockReviewer{},
+				Log:            logger,
+				OtelCol: v1beta1.OpenTelemetryCollector{
+					Spec: v1beta1.OpenTelemetryCollectorSpec{
+						OpenTelemetryCommonFields: v1beta1.OpenTelemetryCommonFields{
+							ServiceAccount: "test-sa",
+						},
+						Mode: v1beta1.ModeDeployment,
+						Observability: v1beta1.ObservabilitySpec{
+							Metrics: v1beta1.MetricsConfigSpec{
+								EnableMetrics: true,
+							},
+						},
+						Config: v1beta1.Config{
+							Processors: &v1beta1.AnyConfig{
+								Object: map[string]any{
+									"k8sattributes": map[string]any{},
+								},
+							},
+							Service: v1beta1.Service{
+								Pipelines: map[string]*v1beta1.Pipeline{
+									"metrics": {
+										Processors: []string{"k8sattributes"},
+									},
+								},
+							},
+						},
+					},
+				},
+				Config: config.New(config.WithPrometheusCRAvailability(prometheus.Available)),
+			},
+			expectedObjects: 9,
+			wantErr:         true,
+			featureGate:     featuregate.PrometheusOperatorIsAvailable,
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			if tt.featureGate != nil {
+				err := otelColFeatureGate.GlobalRegistry().Set(tt.featureGate.ID(), true)
+				require.NoError(t, err)
+				defer func() {
+					err := otelColFeatureGate.GlobalRegistry().Set(tt.featureGate.ID(), false)
+					require.NoError(t, err)
+				}()
+			}
+
+			objects, err := Build(tt.params)
+			if tt.wantErr {
+				require.Error(t, err)
+				return
+			}
+
+			require.NoError(t, err)
+			assert.Len(t, objects, tt.expectedObjects)
+		})
+	}
+}
diff --git a/internal/manifests/collector/config_replace.go b/internal/manifests/collector/config_replace.go
index 6ea55dc44d..6ba35ed435 100644
--- a/internal/manifests/collector/config_replace.go
+++ b/internal/manifests/collector/config_replace.go
@@ -42,7 +42,7 @@ type Config struct {
 	TargetAllocConfig *targetAllocator `yaml:"target_allocator,omitempty"`
 }
 
-func ReplaceConfig(otelcol v1beta1.OpenTelemetryCollector, targetAllocator *v1alpha1.TargetAllocator) (string, error) {
+func ReplaceConfig(otelcol v1beta1.OpenTelemetryCollector, targetAllocator *v1alpha1.TargetAllocator, options ...ta.TAOption) (string, error) {
 	collectorSpec := otelcol.Spec
 	taEnabled := targetAllocator != nil
 	cfgStr, err := collectorSpec.Config.Yaml()
@@ -71,7 +71,7 @@ func ReplaceConfig(otelcol v1beta1.OpenTelemetryCollector, targetAllocator *v1al
 
 	// To avoid issues caused by Prometheus validation logic, which fails regex validation when it encounters
 	// $$ in the prom config, we update the YAML file directly without marshaling and unmarshalling.
-	updPromCfgMap, getCfgPromErr := ta.AddTAConfigToPromConfig(promCfgMap, naming.TAService(targetAllocator.Name))
+	updPromCfgMap, getCfgPromErr := ta.AddTAConfigToPromConfig(promCfgMap, naming.TAService(targetAllocator.Name), options...)
 	if getCfgPromErr != nil {
 		return "", getCfgPromErr
 	}
diff --git a/internal/manifests/collector/configmap.go b/internal/manifests/collector/configmap.go
index 54362549ad..b611dea178 100644
--- a/internal/manifests/collector/configmap.go
+++ b/internal/manifests/collector/configmap.go
@@ -15,12 +15,18 @@
 package collector
 
 import (
+	"path/filepath"
+
 	corev1 "k8s.io/api/core/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 
+	"github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager"
 	"github.com/open-telemetry/opentelemetry-operator/internal/manifests"
 	"github.com/open-telemetry/opentelemetry-operator/internal/manifests/manifestutils"
+	ta "github.com/open-telemetry/opentelemetry-operator/internal/manifests/targetallocator/adapters"
 	"github.com/open-telemetry/opentelemetry-operator/internal/naming"
+	"github.com/open-telemetry/opentelemetry-operator/pkg/constants"
+	"github.com/open-telemetry/opentelemetry-operator/pkg/featuregate"
 )
 
 func ConfigMap(params manifests.Params) (*corev1.ConfigMap, error) {
@@ -37,7 +43,19 @@ func ConfigMap(params manifests.Params) (*corev1.ConfigMap, error) {
 		return nil, err
 	}
 
-	replacedConf, err := ReplaceConfig(params.OtelCol, params.TargetAllocator)
+	replaceCfgOpts := []ta.TAOption{}
+
+	if params.Config.CertManagerAvailability() == certmanager.Available && featuregate.EnableTargetAllocatorMTLS.IsEnabled() {
+		replaceCfgOpts = append(replaceCfgOpts, ta.WithTLSConfig(
+			filepath.Join(constants.TACollectorTLSDirPath, constants.TACollectorCAFileName),
+			filepath.Join(constants.TACollectorTLSDirPath, constants.TACollectorTLSCertFileName),
+			filepath.Join(constants.TACollectorTLSDirPath, constants.TACollectorTLSKeyFileName),
+			naming.TAService(params.OtelCol.Name)),
+		)
+	}
+
+	replacedConf, err := ReplaceConfig(params.OtelCol, params.TargetAllocator, replaceCfgOpts...)
+
 	if err != nil {
 		params.Log.V(2).Info("failed to update prometheus config to use sharded targets: ", "err", err)
 		return nil, err
diff --git a/internal/manifests/collector/configmap_test.go b/internal/manifests/collector/configmap_test.go
index fc66cf3794..a6469704ea 100644
--- a/internal/manifests/collector/configmap_test.go
+++ b/internal/manifests/collector/configmap_test.go
@@ -18,9 +18,14 @@ import (
 	"testing"
 
 	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+	colfg "go.opentelemetry.io/collector/featuregate"
 
+	"github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager"
+	"github.com/open-telemetry/opentelemetry-operator/internal/config"
 	"github.com/open-telemetry/opentelemetry-operator/internal/manifests/manifestutils"
 	"github.com/open-telemetry/opentelemetry-operator/internal/naming"
+	"github.com/open-telemetry/opentelemetry-operator/pkg/featuregate"
 )
 
 func TestDesiredConfigMap(t *testing.T) {
@@ -123,4 +128,58 @@ service:
 	})
 
+	t.Run("should return expected escaped collector config map with target_allocator and https config block", func(t *testing.T) {
+		expectedData := map[string]string{
+			"collector.yaml": `exporters:
+  debug:
+receivers:
+  prometheus:
+    config: {}
+    target_allocator:
+      collector_id: ${POD_NAME}
+      endpoint: https://test-targetallocator:443
+      interval: 30s
+      tls:
+        ca_file: /tls/ca.crt
+        cert_file: /tls/tls.crt
+        key_file: /tls/tls.key
+service:
+  pipelines:
+    metrics:
+      exporters:
+      - debug
+      receivers:
+      - prometheus
+`,
+		}
+
+		param, err := newParams("test/test-img", "testdata/http_sd_config_servicemonitor_test.yaml", config.WithCertManagerAvailability(certmanager.Available))
+		require.NoError(t, err)
+		flgs := featuregate.Flags(colfg.GlobalRegistry())
+		err = flgs.Parse([]string{"--feature-gates=operator.targetallocator.mtls"})
+		require.NoError(t, err)
+
+		hash, _ := manifestutils.GetConfigMapSHA(param.OtelCol.Spec.Config)
+		expectedName := naming.ConfigMap("test", hash)
+
+		expectedLables["app.kubernetes.io/component"] = "opentelemetry-collector"
+		expectedLables["app.kubernetes.io/name"] = "test-collector"
+		expectedLables["app.kubernetes.io/version"] = "latest"
+
+		param.OtelCol.Spec.TargetAllocator.Enabled = true
+		actual, err := ConfigMap(param)
+
+		assert.NoError(t, err)
+		assert.Equal(t, expectedName, actual.Name)
+		assert.Equal(t, expectedLables, actual.Labels)
+		assert.Equal(t, len(expectedData), len(actual.Data))
+		for k, expected := range expectedData {
+			assert.YAMLEq(t, expected, actual.Data[k])
+		}
+
+		// Reset the value
+		expectedLables["app.kubernetes.io/version"] = "0.47.0"
+		assert.NoError(t, err)
+
+	})
 }
diff --git a/internal/manifests/collector/container.go b/internal/manifests/collector/container.go
index 3cf0b1a2e4..f499f08c55 100644
--- a/internal/manifests/collector/container.go
+++ b/internal/manifests/collector/container.go
@@ -25,8 +25,10 @@ import (
 	"k8s.io/apimachinery/pkg/util/validation"
 
 	"github.com/open-telemetry/opentelemetry-operator/apis/v1beta1"
+	"github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager"
 	"github.com/open-telemetry/opentelemetry-operator/internal/config"
 	"github.com/open-telemetry/opentelemetry-operator/internal/naming"
+	"github.com/open-telemetry/opentelemetry-operator/pkg/constants"
 	"github.com/open-telemetry/opentelemetry-operator/pkg/featuregate"
 )
 
@@ -83,6 +85,14 @@ func Container(cfg config.Config, logger logr.Logger, otelcol v1beta1.OpenTeleme
 		})
 	}
 
+	if cfg.CertManagerAvailability() == certmanager.Available && featuregate.EnableTargetAllocatorMTLS.IsEnabled() {
+		volumeMounts = append(volumeMounts,
+			corev1.VolumeMount{
+				Name:      naming.TAClientCertificate(otelcol.Name),
+				MountPath: constants.TACollectorTLSDirPath,
+			})
+	}
+
 	// ensure that the v1alpha1.OpenTelemetryCollectorSpec.Args are ordered when moved to container.Args,
 	// where iterating over a map does not guarantee, so that reconcile will not be fooled by different
 	// ordering in args.
@@ -167,6 +177,12 @@ func Container(cfg config.Config, logger logr.Logger, otelcol v1beta1.OpenTeleme
 		)
 	}
 
+	if configEnvVars, err := otelcol.Spec.Config.GetEnvironmentVariables(logger); err != nil {
+		logger.Error(err, "could not get the environment variables from the config")
+	} else {
+		envVars = append(envVars, configEnvVars...)
+	}
+
 	envVars = append(envVars, proxy.ReadProxyVarsFromEnv()...)
 	return corev1.Container{
 		Name: naming.Container(),
@@ -213,7 +229,7 @@ func getConfigContainerPorts(logger logr.Logger, conf v1beta1.Config) (map[strin
 		}
 	}
 
-	metricsPort, err := conf.Service.MetricsPort()
+	_, metricsPort, err := conf.Service.MetricsEndpoint()
 	if err != nil {
 		logger.Info("couldn't determine metrics port from configuration, using 8888 default value", "error", err)
 		metricsPort = 8888
diff --git a/internal/manifests/collector/container_test.go b/internal/manifests/collector/container_test.go
index 597e98c1e7..3f48fc26da 100644
--- a/internal/manifests/collector/container_test.go
+++ b/internal/manifests/collector/container_test.go
@@ -20,14 +20,19 @@ import (
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
+	colfg "go.opentelemetry.io/collector/featuregate"
 	"gopkg.in/yaml.v3"
 	corev1 "k8s.io/api/core/v1"
 	"k8s.io/apimachinery/pkg/api/resource"
 	logf "sigs.k8s.io/controller-runtime/pkg/log"
 
 	"github.com/open-telemetry/opentelemetry-operator/apis/v1beta1"
+	"github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager"
 	"github.com/open-telemetry/opentelemetry-operator/internal/config"
 	. "github.com/open-telemetry/opentelemetry-operator/internal/manifests/collector"
+	"github.com/open-telemetry/opentelemetry-operator/internal/naming"
+	"github.com/open-telemetry/opentelemetry-operator/pkg/constants"
+	"github.com/open-telemetry/opentelemetry-operator/pkg/featuregate"
 )
 
 var logger = logf.Log.WithName("unit-tests")
@@ -860,3 +865,22 @@ func mustUnmarshalToConfig(t *testing.T, config string) v1beta1.Config {
 	}
 	return cfg
 }
+
+func TestContainerWithCertManagerAvailable(t *testing.T) {
+	otelcol := v1beta1.OpenTelemetryCollector{}
+
+	cfg := config.New(config.WithCertManagerAvailability(certmanager.Available))
+
+	flgs := featuregate.Flags(colfg.GlobalRegistry())
+	err := flgs.Parse([]string{"--feature-gates=operator.targetallocator.mtls"})
+	require.NoError(t, err)
+
+	// test
+	c := Container(cfg, logger, otelcol, true)
+
+	// verify
+	assert.Contains(t, c.VolumeMounts, corev1.VolumeMount{
+		Name:      naming.TAClientCertificate(""),
+		MountPath: constants.TACollectorTLSDirPath,
+	})
+}
diff --git a/internal/manifests/collector/rbac.go b/internal/manifests/collector/rbac.go
index 610d948b67..9ae0a65f1f 100644
--- a/internal/manifests/collector/rbac.go
+++ b/internal/manifests/collector/rbac.go
@@ -15,12 +15,16 @@
 package collector
 
 import (
+	"context"
+	"fmt"
+
 	rbacv1 "k8s.io/api/rbac/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 
 	"github.com/open-telemetry/opentelemetry-operator/internal/manifests"
 	"github.com/open-telemetry/opentelemetry-operator/internal/manifests/manifestutils"
 	"github.com/open-telemetry/opentelemetry-operator/internal/naming"
+	"github.com/open-telemetry/opentelemetry-operator/internal/rbac"
 )
 
 func ClusterRole(params manifests.Params) (*rbacv1.ClusterRole, error) {
@@ -85,3 +89,26 @@ func ClusterRoleBinding(params manifests.Params) (*rbacv1.ClusterRoleBinding, er
 		},
 	}, nil
 }
+
+func CheckRbacRules(params manifests.Params, saName string) ([]string, error) {
+	ctx := context.Background()
+
+	rules, err := params.OtelCol.Spec.Config.GetAllRbacRules(params.Log)
+	if err != nil {
+		return nil, err
+	}
+
+	r := []*rbacv1.PolicyRule{}
+
+	for _, rule := range rules {
+		rule := rule
+		r = append(r, &rule)
+	}
+
+	if subjectAccessReviews, err := params.Reviewer.CheckPolicyRules(ctx, saName, params.OtelCol.Namespace, r...); err != nil {
+		return nil, fmt.Errorf("%s: %w", "unable to check rbac rules", err)
+	} else if allowed, deniedReviews := rbac.AllSubjectAccessReviewsAllowed(subjectAccessReviews); !allowed {
+		return rbac.WarningsGroupedByResource(deniedReviews), nil
+	}
+	return nil, nil
+}
diff --git a/internal/manifests/collector/service.go b/internal/manifests/collector/service.go
index e5d4c65ead..7e27eb752c 100644
--- a/internal/manifests/collector/service.go
+++ b/internal/manifests/collector/service.go
@@ -42,10 +42,11 @@ const (
 	BaseServiceType ServiceType = iota
 	HeadlessServiceType
 	MonitoringServiceType
+	ExtensionServiceType
 )
 
 func (s ServiceType) String() string {
-	return [...]string{"base", "headless", "monitoring"}[s]
+	return [...]string{"base", "headless", "monitoring", "extension"}[s]
 }
 
 func HeadlessService(params manifests.Params) (*corev1.Service, error) {
@@ -83,7 +84,7 @@ func MonitoringService(params manifests.Params) (*corev1.Service, error) {
 		return nil, err
 	}
 
-	metricsPort, err := params.OtelCol.Spec.Config.Service.MetricsPort()
+	_, metricsPort, err := params.OtelCol.Spec.Config.Service.MetricsEndpoint()
 	if err != nil {
 		return nil, err
 	}
@@ -108,6 +109,39 @@ func MonitoringService(params manifests.Params) (*corev1.Service, error) {
 	}, nil
 }
 
+func ExtensionService(params manifests.Params) (*corev1.Service, error) {
+	name := naming.ExtensionService(params.OtelCol.Name)
+	labels := manifestutils.Labels(params.OtelCol.ObjectMeta, name, params.OtelCol.Spec.Image, ComponentOpenTelemetryCollector, []string{})
+	labels[serviceTypeLabel] = ExtensionServiceType.String()
+
+	annotations, err := manifestutils.Annotations(params.OtelCol, params.Config.AnnotationsFilter())
+	if err != nil {
+		return nil, err
+	}
+
+	ports, err := params.OtelCol.Spec.Config.GetExtensionPorts(params.Log)
+	if err != nil {
+		return nil, err
+	}
+
+	if len(ports) == 0 {
+		return nil, nil
+	}
+
+	return &corev1.Service{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:        name,
+			Namespace:   params.OtelCol.Namespace,
+			Labels:      labels,
+			Annotations: annotations,
+		},
+		Spec: corev1.ServiceSpec{
+			Ports:    ports,
+			Selector: manifestutils.SelectorLabels(params.OtelCol.ObjectMeta, ComponentOpenTelemetryCollector),
+		},
+	}, nil
+}
+
 func Service(params manifests.Params) (*corev1.Service, error) {
 	name := naming.Service(params.OtelCol.Name)
 	labels := manifestutils.Labels(params.OtelCol.ObjectMeta, name, params.OtelCol.Spec.Image, ComponentOpenTelemetryCollector, []string{})
@@ -118,7 +152,7 @@ func Service(params manifests.Params) (*corev1.Service, error) {
 		return nil, err
 	}
 
-	ports, err := params.OtelCol.Spec.Config.GetAllPorts(params.Log)
+	ports, err := params.OtelCol.Spec.Config.GetReceiverAndExporterPorts(params.Log)
 	if err != nil {
 		return nil, err
 	}
diff --git a/internal/manifests/collector/service_test.go b/internal/manifests/collector/service_test.go
index 11ac981585..7a9695e594 100644
--- a/internal/manifests/collector/service_test.go
+++ b/internal/manifests/collector/service_test.go
@@ -26,6 +26,7 @@ import (
 	"github.com/open-telemetry/opentelemetry-operator/internal/config"
 	"github.com/open-telemetry/opentelemetry-operator/internal/manifests"
 	"github.com/open-telemetry/opentelemetry-operator/internal/manifests/manifestutils"
+	"github.com/open-telemetry/opentelemetry-operator/internal/naming"
 )
 
 func TestExtractPortNumbersAndNames(t *testing.T) {
@@ -321,6 +322,206 @@ func TestMonitoringService(t *testing.T) {
 	})
 }
 
+func TestExtensionService(t *testing.T) {
+	testCases := []struct {
+		name          string
+		params        manifests.Params
+		expectedPorts []v1.ServicePort
+	}{
+		{
+			name: "when the extension has http endpoint",
+			params: manifests.Params{
+				Config: config.Config{},
+				Log:    logger,
+				OtelCol: v1beta1.OpenTelemetryCollector{
+					ObjectMeta: metav1.ObjectMeta{
+						Name: "test",
+					},
+					Spec: v1beta1.OpenTelemetryCollectorSpec{
+						Config: v1beta1.Config{
+							Service: v1beta1.Service{
+								Extensions: []string{"jaeger_query"},
+							},
+							Extensions: &v1beta1.AnyConfig{
+								Object: map[string]interface{}{
+									"jaeger_query": map[string]interface{}{
+										"http": map[string]interface{}{
+											"endpoint": "0.0.0.0:16686",
+										},
+									},
+								},
+							},
+						},
+					},
+				},
+			},
+			expectedPorts: []v1.ServicePort{
+				{
+					Name: "jaeger-query",
+					Port: 16686,
+					TargetPort: intstr.IntOrString{
+						IntVal: 16686,
+					},
+				},
+			},
+		},
+		{
+			name: "when the extension has grpc endpoint",
+			params: manifests.Params{
+				Config: config.Config{},
+				Log:    logger,
+				OtelCol: v1beta1.OpenTelemetryCollector{
+					ObjectMeta: metav1.ObjectMeta{
+						Name: "test",
+					},
+					Spec: v1beta1.OpenTelemetryCollectorSpec{
+						Config: v1beta1.Config{
+							Service: v1beta1.Service{
+								Extensions: []string{"jaeger_query"},
+							},
+							Extensions: &v1beta1.AnyConfig{
+								Object: map[string]interface{}{
+									"jaeger_query": map[string]interface{}{
+										"http": map[string]interface{}{
+											"endpoint": "0.0.0.0:16686",
+										},
+									},
+								},
+							},
+						},
+					},
+				},
+			},
+			expectedPorts: []v1.ServicePort{
+				{
+					Name: "jaeger-query",
+					Port: 16686,
+					TargetPort: intstr.IntOrString{
+						IntVal: 16686,
+					},
+				},
+			},
+		},
+		{
+			name: "when the extension has both http and grpc endpoint",
+			params: manifests.Params{
+				Config: config.Config{},
+				Log:    logger,
+				OtelCol: v1beta1.OpenTelemetryCollector{
+					ObjectMeta: metav1.ObjectMeta{
+						Name: "test",
+					},
+					Spec: v1beta1.OpenTelemetryCollectorSpec{
+						Config: v1beta1.Config{
+							Service: v1beta1.Service{
+								Extensions: []string{"jaeger_query"},
+							},
+							Extensions: &v1beta1.AnyConfig{
+								Object: map[string]interface{}{
+									"jaeger_query": map[string]interface{}{
+										"http": map[string]interface{}{
+											"endpoint": "0.0.0.0:16686",
+										},
+										"grpc": map[string]interface{}{
+											"endpoint": "0.0.0.0:16686",
+										},
+									},
+								},
+							},
+						},
+					},
+				},
+			},
+			expectedPorts: []v1.ServicePort{
+				{
+					Name: "jaeger-query",
+					Port: 16686,
+					TargetPort: intstr.IntOrString{
+						IntVal: 16686,
+					},
+				},
+			},
+		},
+		{
+			name: "when the extension has no extensions defined",
+			params: manifests.Params{
+				Config: config.Config{},
+				Log:    logger,
+				OtelCol: v1beta1.OpenTelemetryCollector{
+					ObjectMeta: metav1.ObjectMeta{
+						Name: "test",
+					},
+					Spec: v1beta1.OpenTelemetryCollectorSpec{
+						Config: v1beta1.Config{
+							Service: v1beta1.Service{
+								Extensions: []string{"jaeger_query"},
+							},
+							Extensions: &v1beta1.AnyConfig{
+								Object: map[string]interface{}{},
+							},
+						},
+					},
+				},
+			},
+			expectedPorts: []v1.ServicePort{},
+		},
+		{
+			name: "when the extension has no endpoint defined",
+			params: manifests.Params{
+				Config: config.Config{},
+				Log:    logger,
+				OtelCol: v1beta1.OpenTelemetryCollector{
+					ObjectMeta: metav1.ObjectMeta{
+						Name: "test",
+					},
+					Spec: v1beta1.OpenTelemetryCollectorSpec{
+						Config: v1beta1.Config{
+							Service: v1beta1.Service{
+								Extensions: []string{"jaeger_query"},
+							},
+							Extensions: &v1beta1.AnyConfig{
+								Object: map[string]interface{}{
+									"jaeger_query": map[string]interface{}{},
+								},
+							},
+						},
+					},
+				},
+			},
+			expectedPorts: []v1.ServicePort{
+				{
+					Name: "jaeger-query",
+					Port: 16686,
+					TargetPort: intstr.IntOrString{
+						IntVal: 16686,
+					},
+				},
+			},
+		},
+	}
+
+	for _, tc := range testCases {
+		tc := tc
+		t.Run(tc.name, func(t *testing.T) {
+			actual, err := ExtensionService(tc.params)
+			assert.NoError(t, err)
+
+			if len(tc.expectedPorts) > 0 {
+				assert.NotNil(t, actual)
+				assert.Equal(t, actual.Name, naming.ExtensionService(tc.params.OtelCol.Name))
+				// ports assertion
+				assert.Equal(t, len(tc.expectedPorts), len(actual.Spec.Ports))
+				assert.Equal(t, tc.expectedPorts[0].Name, actual.Spec.Ports[0].Name)
+				assert.Equal(t, tc.expectedPorts[0].Port, actual.Spec.Ports[0].Port)
+				assert.Equal(t, tc.expectedPorts[0].TargetPort.IntVal, actual.Spec.Ports[0].TargetPort.IntVal)
+			} else {
+				// no ports, no service
+				assert.Nil(t, actual)
+			}
+		})
+	}
+}
+
 func service(name string, ports []v1beta1.PortsSpec) v1.Service {
 	return serviceWithInternalTrafficPolicy(name, ports, v1.ServiceInternalTrafficPolicyCluster)
 }
diff --git a/internal/manifests/collector/statefulset.go b/internal/manifests/collector/statefulset.go
index 6b7e92ec05..3a98611c4e 100644
--- a/internal/manifests/collector/statefulset.go
+++ b/internal/manifests/collector/statefulset.go
@@ -73,9 +73,10 @@ func StatefulSet(params manifests.Params) (*appsv1.StatefulSet, error) {
 					TopologySpreadConstraints: params.OtelCol.Spec.TopologySpreadConstraints,
 				},
 			},
-			Replicas:             params.OtelCol.Spec.Replicas,
-			PodManagementPolicy:  "Parallel",
-			VolumeClaimTemplates: VolumeClaimTemplates(params.OtelCol),
+			Replicas:                             params.OtelCol.Spec.Replicas,
+			PodManagementPolicy:                  "Parallel",
+			VolumeClaimTemplates:                 VolumeClaimTemplates(params.OtelCol),
+			PersistentVolumeClaimRetentionPolicy: params.OtelCol.Spec.PersistentVolumeClaimRetentionPolicy,
 		},
 	}, nil
 }
diff --git a/internal/manifests/collector/statefulset_test.go b/internal/manifests/collector/statefulset_test.go
index 916e25e4bb..1963afe131 100644
--- a/internal/manifests/collector/statefulset_test.go
+++ b/internal/manifests/collector/statefulset_test.go
@@ -178,6 +178,45 @@ func TestStatefulSetVolumeClaimTemplates(t *testing.T) {
 	assert.Equal(t, resource.MustParse("1Gi"), ss.Spec.VolumeClaimTemplates[0].Spec.Resources.Requests["storage"])
 }
 
+func TestStatefulSetPersistentVolumeRetentionPolicy(t *testing.T) {
+	// prepare
+	otelcol := v1beta1.OpenTelemetryCollector{
+		ObjectMeta: metav1.ObjectMeta{
+			Name: "my-instance",
+		},
+		Spec: v1beta1.OpenTelemetryCollectorSpec{
+			Mode: "statefulset",
+			StatefulSetCommonFields: v1beta1.StatefulSetCommonFields{
+				PersistentVolumeClaimRetentionPolicy: &appsv1.StatefulSetPersistentVolumeClaimRetentionPolicy{
+					WhenDeleted: appsv1.RetainPersistentVolumeClaimRetentionPolicyType,
+					WhenScaled:  appsv1.DeletePersistentVolumeClaimRetentionPolicyType,
+				},
+			},
+		},
+	}
+	cfg := config.New()
+
+	params := manifests.Params{
+		OtelCol: otelcol,
+		Config:  cfg,
+		Log:     logger,
+	}
+
+	// test
+	ss, err := StatefulSet(params)
+	require.NoError(t, err)
+
+	// assert PersistentVolumeClaimRetentionPolicy added
+	assert.NotNil(t, ss.Spec.PersistentVolumeClaimRetentionPolicy)
+
+	// assert correct WhenDeleted value
+	assert.Equal(t, ss.Spec.PersistentVolumeClaimRetentionPolicy.WhenDeleted, appsv1.RetainPersistentVolumeClaimRetentionPolicyType)
+
+	// assert correct WhenScaled value
+	assert.Equal(t, ss.Spec.PersistentVolumeClaimRetentionPolicy.WhenScaled, appsv1.DeletePersistentVolumeClaimRetentionPolicyType)
+
+}
+
 func TestStatefulSetPodAnnotations(t *testing.T) {
 	// prepare
 	testPodAnnotationValues := map[string]string{"annotation-key": "annotation-value"}
diff --git a/internal/manifests/collector/targetallocator_test.go b/internal/manifests/collector/targetallocator_test.go
index 3d281d69fd..77d6e4e6f7 100644
--- a/internal/manifests/collector/targetallocator_test.go
+++ b/internal/manifests/collector/targetallocator_test.go
@@ -45,17 +45,6 @@ func TestTargetAllocator(t *testing.T) {
 	privileged := true
 	runAsUser := int64(1337)
 	runasGroup := int64(1338)
-	otelcolConfig := v1beta1.Config{
-		Receivers: v1beta1.AnyConfig{
-			Object: map[string]interface{}{
-				"prometheus": map[string]any{
-					"config": map[string]any{
-						"scrape_configs": []any{},
-					},
-				},
-			},
-		},
-	}
 
 	testCases := []struct {
 		name string
@@ -79,7 +68,6 @@ func TestTargetAllocator(t *testing.T) {
 			input: v1beta1.OpenTelemetryCollector{
 				ObjectMeta: objectMetadata,
 				Spec: v1beta1.OpenTelemetryCollectorSpec{
-					Config: otelcolConfig,
 					TargetAllocator: v1beta1.TargetAllocatorEmbedded{
 						Enabled: true,
 					},
@@ -87,7 +75,9 @@
 			},
 			want: &v1alpha1.TargetAllocator{
 				ObjectMeta: objectMetadata,
-				Spec:       v1alpha1.TargetAllocatorSpec{},
+				Spec: v1alpha1.TargetAllocatorSpec{
+					GlobalConfig: v1beta1.AnyConfig{},
+				},
 			},
 		},
 		{
@@ -190,7 +180,6 @@ func TestTargetAllocator(t *testing.T) {
 						},
 					},
 				},
-				Config: otelcolConfig,
 			},
 		},
 		want: &v1alpha1.TargetAllocator{
diff --git a/internal/manifests/collector/volume.go b/internal/manifests/collector/volume.go
index ea033b3a4a..f1bd201056 100644
--- a/internal/manifests/collector/volume.go
+++ b/internal/manifests/collector/volume.go
@@ -19,9 +19,11 @@ import (
 	corev1 "k8s.io/api/core/v1"
 
 	"github.com/open-telemetry/opentelemetry-operator/apis/v1beta1"
+	"github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager"
 	"github.com/open-telemetry/opentelemetry-operator/internal/config"
 	"github.com/open-telemetry/opentelemetry-operator/internal/manifests/manifestutils"
 	"github.com/open-telemetry/opentelemetry-operator/internal/naming"
+	"github.com/open-telemetry/opentelemetry-operator/pkg/featuregate"
 )
 
 // Volumes builds the volumes for the given instance, including the config map volume.
@@ -41,6 +43,17 @@ func Volumes(cfg config.Config, otelcol v1beta1.OpenTelemetryCollector) []corev1
 		},
 	}}
 
+	if cfg.CertManagerAvailability() == certmanager.Available && featuregate.EnableTargetAllocatorMTLS.IsEnabled() {
+		volumes = append(volumes, corev1.Volume{
+			Name: naming.TAClientCertificate(otelcol.Name),
+			VolumeSource: corev1.VolumeSource{
+				Secret: &corev1.SecretVolumeSource{
+					SecretName: naming.TAClientCertificateSecretName(otelcol.Name),
+				},
+			},
+		})
+	}
+
 	if len(otelcol.Spec.Volumes) > 0 {
 		volumes = append(volumes, otelcol.Spec.Volumes...)
 	}
diff --git a/internal/manifests/collector/volume_test.go b/internal/manifests/collector/volume_test.go
index 06832e6314..03747d519e 100644
--- a/internal/manifests/collector/volume_test.go
+++ b/internal/manifests/collector/volume_test.go
@@ -18,12 +18,17 @@ import (
 	"testing"
 
 	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+	colfg "go.opentelemetry.io/collector/featuregate"
 	corev1 "k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 
 	"github.com/open-telemetry/opentelemetry-operator/apis/v1beta1"
+	"github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager"
 	"github.com/open-telemetry/opentelemetry-operator/internal/config"
 	. "github.com/open-telemetry/opentelemetry-operator/internal/manifests/collector"
 	"github.com/open-telemetry/opentelemetry-operator/internal/naming"
+	"github.com/open-telemetry/opentelemetry-operator/pkg/featuregate"
 )
 
 func TestVolumeNewDefault(t *testing.T) {
@@ -89,3 +94,50 @@ func TestVolumeWithMoreConfigMaps(t *testing.T) {
 	assert.Equal(t, "configmap-configmap-test", volumes[1].Name)
 	assert.Equal(t, "configmap-configmap-test2", volumes[2].Name)
 }
+
+func TestVolumeWithTargetAllocatorMTLS(t *testing.T) {
+	t.Run("CertManager available and EnableTargetAllocatorMTLS enabled", func(t *testing.T) {
+		otelcol := v1beta1.OpenTelemetryCollector{
+			ObjectMeta: metav1.ObjectMeta{
+				Name: "test-collector",
+			},
+		}
+		cfg := config.New(config.WithCertManagerAvailability(certmanager.Available))
+
+		flgs := featuregate.Flags(colfg.GlobalRegistry())
+		err := flgs.Parse([]string{"--feature-gates=operator.targetallocator.mtls"})
+		require.NoError(t, err)
+
+		volumes := Volumes(cfg, otelcol)
+
+		expectedVolume := corev1.Volume{
+			Name: naming.TAClientCertificate(otelcol.Name),
+			VolumeSource: corev1.VolumeSource{
+				Secret: &corev1.SecretVolumeSource{
+					SecretName: naming.TAClientCertificateSecretName(otelcol.Name),
+				},
+			},
+		}
+		assert.Contains(t, volumes, expectedVolume)
+	})
+
+	t.Run("CertManager not available", func(t *testing.T) {
+		otelcol := v1beta1.OpenTelemetryCollector{}
+		cfg := config.New(config.WithCertManagerAvailability(certmanager.NotAvailable))
+
+		flgs := featuregate.Flags(colfg.GlobalRegistry())
+		err := flgs.Parse([]string{"--feature-gates=operator.targetallocator.mtls"})
+		require.NoError(t, err)
+
+		volumes := Volumes(cfg, otelcol)
+		assert.NotContains(t, volumes, corev1.Volume{Name: naming.TAClientCertificate(otelcol.Name)})
+	})
+
+	t.Run("EnableTargetAllocatorMTLS disabled", func(t *testing.T) {
+		otelcol := v1beta1.OpenTelemetryCollector{}
+		cfg := config.New(config.WithCertManagerAvailability(certmanager.Available))
+
+		volumes := Volumes(cfg, otelcol)
+		assert.NotContains(t, volumes, corev1.Volume{Name: naming.TAClientCertificate(otelcol.Name)})
+	})
+}
diff --git a/internal/manifests/mutate.go b/internal/manifests/mutate.go
index 75c1a07804..fda0e22dbb 100644
--- a/internal/manifests/mutate.go
+++ b/internal/manifests/mutate.go
@@ -15,11 +15,11 @@
 package manifests
 
 import (
-	"errors"
 	"fmt"
 	"reflect"
 
 	"dario.cat/mergo"
+	cmv1 "github.com/cert-manager/cert-manager/pkg/apis/certmanager/v1"
 	routev1 "github.com/openshift/api/route/v1"
 	monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
 	appsv1 "k8s.io/api/apps/v1"
@@ -31,10 +31,20 @@ import (
 	apiequality "k8s.io/apimachinery/pkg/api/equality"
 	"sigs.k8s.io/controller-runtime/pkg/client"
 	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
+
+	"github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1"
 )
 
+type ImmutableFieldChangeErr struct {
+	Field string
+}
+
+func (e *ImmutableFieldChangeErr) Error() string {
+	return fmt.Sprintf("Immutable field change attempted: %s", e.Field)
+}
+
 var (
-	ImmutableChangeErr = errors.New("immutable field change attempted")
+	ImmutableChangeErr *ImmutableFieldChangeErr
 )
 
 // MutateFuncFor returns a mutate function based on the
@@ -55,6 +65,7 @@ var (
 // - HorizontalPodAutoscaler
 // - Route
 // - Secret
+// - TargetAllocator
 // In order for the operator to reconcile other types, they must be added here.
 // The function returned takes no arguments but instead uses the existing and desired inputs here. Existing is expected
 // to be set by the controller-runtime package through a client get call.
@@ -166,6 +177,21 @@ func MutateFuncFor(existing, desired client.Object) controllerutil.MutateFn {
 			wantPr := desired.(*corev1.Secret)
 			mutateSecret(pr, wantPr)
 
+		case *cmv1.Certificate:
+			cert := existing.(*cmv1.Certificate)
+			wantCert := desired.(*cmv1.Certificate)
+			mutateCertificate(cert, wantCert)
+
+		case *cmv1.Issuer:
+			issuer := existing.(*cmv1.Issuer)
+			wantIssuer := desired.(*cmv1.Issuer)
+			mutateIssuer(issuer, wantIssuer)
+
+		case *v1alpha1.TargetAllocator:
+			ta := existing.(*v1alpha1.TargetAllocator)
+			wantTa := desired.(*v1alpha1.TargetAllocator)
+			mutateTargetAllocator(ta, wantTa)
+
 		default:
 			t := reflect.TypeOf(existing).String()
 			return fmt.Errorf("missing mutate implementation for resource type: %s", t)
@@ -178,10 +204,6 @@ func mergeWithOverride(dst, src interface{}) error {
 	return mergo.Merge(dst, src, mergo.WithOverride)
 }
 
-func mergeWithOverwriteWithEmptyValue(dst, src interface{}) error {
-	return mergo.Merge(dst, src, mergo.WithOverwriteWithEmptyValue)
-}
-
 func mutateSecret(existing, desired *corev1.Secret) {
 	existing.Labels = desired.Labels
 	existing.Annotations = desired.Annotations
@@ -259,90 +281,131 @@ func mutatePodMonitor(existing, desired *monitoringv1.PodMonitor) {
 	existing.Spec = desired.Spec
 }
 
+func mutateTargetAllocator(existing, desired *v1alpha1.TargetAllocator) {
+	existing.Annotations = desired.Annotations
+	existing.Labels = desired.Labels
+	existing.Spec = desired.Spec
+}
+
 func mutateService(existing, desired *corev1.Service) {
 	existing.Spec.Ports = desired.Spec.Ports
 	existing.Spec.Selector = desired.Spec.Selector
 }
 
 func mutateDaemonset(existing, desired *appsv1.DaemonSet) error {
-	if !existing.CreationTimestamp.IsZero() && !apiequality.Semantic.DeepEqual(desired.Spec.Selector, existing.Spec.Selector) {
-		return ImmutableChangeErr
-	}
-	// Daemonset selector is immutable so we set this value only if
-	// a new object is going to be created
-	if existing.CreationTimestamp.IsZero() {
-		existing.Spec.Selector = desired.Spec.Selector
-	}
-	if err := mergeWithOverride(&existing.Spec, desired.Spec); err != nil {
-		return err
+	if !existing.CreationTimestamp.IsZero() {
+		if !apiequality.Semantic.DeepEqual(desired.Spec.Selector, existing.Spec.Selector) {
+			return &ImmutableFieldChangeErr{Field: "Spec.Selector"}
+		}
+		if err := hasImmutableLabelChange(existing.Spec.Selector.MatchLabels, desired.Spec.Template.Labels); err != nil {
+			return err
+		}
 	}
-	if err := mergeWithOverwriteWithEmptyValue(&existing.Spec.Template.Spec.NodeSelector, desired.Spec.Template.Spec.NodeSelector); err != nil {
+
+	existing.Spec.MinReadySeconds = desired.Spec.MinReadySeconds
+	existing.Spec.RevisionHistoryLimit = desired.Spec.RevisionHistoryLimit
+	existing.Spec.UpdateStrategy = desired.Spec.UpdateStrategy
+
+	if err := mutatePodTemplate(&existing.Spec.Template, &desired.Spec.Template); err != nil {
 		return err
 	}
+
 	return nil
 }
 
 func mutateDeployment(existing, desired *appsv1.Deployment) error {
-	if !existing.CreationTimestamp.IsZero() && !apiequality.Semantic.DeepEqual(desired.Spec.Selector, existing.Spec.Selector) {
-		return ImmutableChangeErr
-	}
-	// Deployment selector is immutable so we set this value only if
-	// a new object is going to be created
-	if existing.CreationTimestamp.IsZero() {
-		existing.Spec.Selector = desired.Spec.Selector
+	if !existing.CreationTimestamp.IsZero() {
+		if !apiequality.Semantic.DeepEqual(desired.Spec.Selector, existing.Spec.Selector) {
+			return &ImmutableFieldChangeErr{Field: "Spec.Selector"}
+		}
+		if err := hasImmutableLabelChange(existing.Spec.Selector.MatchLabels, desired.Spec.Template.Labels); err != nil {
+			return err
+		}
 	}
+
+	existing.Spec.MinReadySeconds = desired.Spec.MinReadySeconds
+	existing.Spec.Paused = desired.Spec.Paused
+	existing.Spec.ProgressDeadlineSeconds = desired.Spec.ProgressDeadlineSeconds
 	existing.Spec.Replicas = desired.Spec.Replicas
-	if err := mergeWithOverride(&existing.Spec.Template, desired.Spec.Template); err != nil {
-		return err
-	}
-	if err := mergeWithOverwriteWithEmptyValue(&existing.Spec.Template.Spec.NodeSelector, desired.Spec.Template.Spec.NodeSelector); err != nil {
-		return err
-	}
-	if err := mergeWithOverride(&existing.Spec.Strategy, desired.Spec.Strategy); err != nil {
+	existing.Spec.RevisionHistoryLimit = desired.Spec.RevisionHistoryLimit
+	existing.Spec.Strategy = desired.Spec.Strategy
+
+	if err := mutatePodTemplate(&existing.Spec.Template, &desired.Spec.Template); err != nil {
 		return err
 	}
+
 	return nil
 }
 
 func mutateStatefulSet(existing, desired *appsv1.StatefulSet) error {
-	if hasChange, field := hasImmutableFieldChange(existing, desired); hasChange {
-		return fmt.Errorf("%s is being changed, %w", field, ImmutableChangeErr)
-	}
-	// StatefulSet selector is immutable so we set this value only if
-	// a new object is going to be created
-	if existing.CreationTimestamp.IsZero() {
-		existing.Spec.Selector = desired.Spec.Selector
+	if !existing.CreationTimestamp.IsZero() {
+		if !apiequality.Semantic.DeepEqual(desired.Spec.Selector, existing.Spec.Selector) {
+			return &ImmutableFieldChangeErr{Field: "Spec.Selector"}
+		}
+		if err := hasImmutableLabelChange(existing.Spec.Selector.MatchLabels, desired.Spec.Template.Labels); err != nil {
+			return err
+		}
+		if hasVolumeClaimsTemplatesChanged(existing, desired) {
+			return &ImmutableFieldChangeErr{Field: "Spec.VolumeClaimTemplates"}
+		}
 	}
+
+	existing.Spec.MinReadySeconds = desired.Spec.MinReadySeconds
+	existing.Spec.Ordinals = desired.Spec.Ordinals
+	existing.Spec.PersistentVolumeClaimRetentionPolicy = desired.Spec.PersistentVolumeClaimRetentionPolicy
 	existing.Spec.PodManagementPolicy = desired.Spec.PodManagementPolicy
 	existing.Spec.Replicas = desired.Spec.Replicas
+	existing.Spec.RevisionHistoryLimit = desired.Spec.RevisionHistoryLimit
+	existing.Spec.ServiceName = desired.Spec.ServiceName
+	existing.Spec.UpdateStrategy = desired.Spec.UpdateStrategy
 
 	for i := range existing.Spec.VolumeClaimTemplates {
 		existing.Spec.VolumeClaimTemplates[i].TypeMeta = desired.Spec.VolumeClaimTemplates[i].TypeMeta
 		existing.Spec.VolumeClaimTemplates[i].ObjectMeta = desired.Spec.VolumeClaimTemplates[i].ObjectMeta
 		existing.Spec.VolumeClaimTemplates[i].Spec = desired.Spec.VolumeClaimTemplates[i].Spec
 	}
-	if err := mergeWithOverride(&existing.Spec.Template, desired.Spec.Template); err != nil {
-		return err
-	}
-	if err := mergeWithOverwriteWithEmptyValue(&existing.Spec.Template.Spec.NodeSelector, desired.Spec.Template.Spec.NodeSelector); err != nil {
+
+	if err := mutatePodTemplate(&existing.Spec.Template, &desired.Spec.Template); err != nil {
 		return err
 	}
+
 	return nil
 }
 
-func hasImmutableFieldChange(existing, desired *appsv1.StatefulSet) (bool, string) {
-	if existing.CreationTimestamp.IsZero() {
-		return false, ""
-	}
-	if !apiequality.Semantic.DeepEqual(desired.Spec.Selector, existing.Spec.Selector) {
-		return true, fmt.Sprintf("Spec.Selector: desired: %s existing: %s", desired.Spec.Selector, existing.Spec.Selector)
+func mutateCertificate(existing, desired *cmv1.Certificate) {
+	existing.Annotations = desired.Annotations
+	existing.Labels = desired.Labels
+	existing.Spec = desired.Spec
+}
+
+func mutateIssuer(existing, desired *cmv1.Issuer) {
+	existing.Annotations = desired.Annotations
+	existing.Labels = desired.Labels
+	existing.Spec = desired.Spec
+}
+
+func mutatePodTemplate(existing, desired *corev1.PodTemplateSpec) error {
+	if err := mergeWithOverride(&existing.Labels, desired.Labels); err != nil {
+		return err
 	}
-	if hasVolumeClaimsTemplatesChanged(existing, desired) {
-		return true, "Spec.VolumeClaimTemplates"
+	if err := mergeWithOverride(&existing.Annotations, desired.Annotations); err != nil {
+		return err
 	}
-	return false, ""
+	existing.Spec = desired.Spec
+
+	return nil
+
+}
+
+func hasImmutableLabelChange(existingSelectorLabels, desiredLabels map[string]string) error {
+	for k, v := range existingSelectorLabels {
+		if vv, ok := desiredLabels[k]; !ok || vv != v {
+			return &ImmutableFieldChangeErr{Field: "Spec.Template.Metadata.Labels"}
+		}
+	}
+	return nil
 }
 
 // hasVolumeClaimsTemplatesChanged if volume claims template change has been detected.
@@ -365,6 +428,9 @@ func hasVolumeClaimsTemplatesChanged(existing, desired *appsv1.StatefulSet) bool
 		if !apiequality.Semantic.DeepEqual(desired.Spec.VolumeClaimTemplates[i].Annotations, existing.Spec.VolumeClaimTemplates[i].Annotations) {
 			return true
 		}
+		if !apiequality.Semantic.DeepEqual(desired.Spec.VolumeClaimTemplates[i].Labels, existing.Spec.VolumeClaimTemplates[i].Labels) {
+			return true
+		}
 		if !apiequality.Semantic.DeepEqual(desired.Spec.VolumeClaimTemplates[i].Spec, existing.Spec.VolumeClaimTemplates[i].Spec) {
 			return true
 		}
diff --git a/internal/manifests/mutate_test.go b/internal/manifests/mutate_test.go
index c165c8535c..6009aa007d 100644
--- a/internal/manifests/mutate_test.go
+++ b/internal/manifests/mutate_test.go
@@ -19,6 +19,7 @@ import (
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
+	appsv1 "k8s.io/api/apps/v1"
 	corev1 "k8s.io/api/core/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 )
@@ -48,3 +49,2446 @@ func TestMutateServiceAccount(t *testing.T) {
 		},
 	}, existing)
 }
+
+func TestMutateDaemonsetAdditionalContainers(t *testing.T) {
+	tests := []struct {
+		name     string
+		existing appsv1.DaemonSet
+		desired  appsv1.DaemonSet
+	}{
+		{
+			name: "add container to daemonset",
+			existing: appsv1.DaemonSet{
+				ObjectMeta: metav1.ObjectMeta{
+					Name: "daemonset",
+				},
+				Spec: appsv1.DaemonSetSpec{
+					Template: corev1.PodTemplateSpec{
+						Spec: corev1.PodSpec{
+							Containers: []corev1.Container{
+								{
+									Name:
"collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + { + Name: "alpine", + Image: "alpine:latest", + }, + }, + }, + }, + }, + }, + }, + { + name: "remove container from daemonset", + existing: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + { + Name: "alpine", + Image: "alpine:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + }, + { + name: "modify container in daemonset", + existing: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + { + Name: "alpine", + Image: "alpine:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + { + Name: "alpine", + Image: "alpine:1.0", + }, + }, + }, + }, + }, + }, + }, + } + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + mutateFn := MutateFuncFor(&tt.existing, 
&tt.desired) + err := mutateFn() + require.NoError(t, err) + assert.Equal(t, tt.existing, tt.desired) + }) + } +} + +func TestMutateDeploymentAdditionalContainers(t *testing.T) { + tests := []struct { + name string + existing appsv1.Deployment + desired appsv1.Deployment + }{ + { + name: "add container to deployment", + existing: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + { + Name: "alpine", + Image: "alpine:latest", + }, + }, + }, + }, + }, + }, + }, + { + name: "remove container from deployment", + existing: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + { + Name: "alpine", + Image: "alpine:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + }, + { + name: "modify container in deployment", + existing: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + 
}, + { + Name: "alpine", + Image: "alpine:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + { + Name: "alpine", + Image: "alpine:1.0", + }, + }, + }, + }, + }, + }, + }, + } + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + mutateFn := MutateFuncFor(&tt.existing, &tt.desired) + err := mutateFn() + require.NoError(t, err) + assert.Equal(t, tt.existing, tt.desired) + }) + } +} + +func TestMutateStatefulSetAdditionalContainers(t *testing.T) { + tests := []struct { + name string + existing appsv1.StatefulSet + desired appsv1.StatefulSet + }{ + { + name: "add container to statefulset", + existing: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + { + Name: "alpine", + Image: "alpine:latest", + }, + }, + }, + }, + }, + }, + }, + { + name: "remove container from statefulset", + existing: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + { + Name: "alpine", + Image: "alpine:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.StatefulSet{ + ObjectMeta: 
metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + }, + { + name: "modify container in statefulset", + existing: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + { + Name: "alpine", + Image: "alpine:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + { + Name: "alpine", + Image: "alpine:1.0", + }, + }, + }, + }, + }, + }, + }, + } + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + mutateFn := MutateFuncFor(&tt.existing, &tt.desired) + err := mutateFn() + require.NoError(t, err) + assert.Equal(t, tt.existing, tt.desired) + }) + } +} + +func TestMutateDaemonsetAffinity(t *testing.T) { + tests := []struct { + name string + existing appsv1.DaemonSet + desired appsv1.DaemonSet + }{ + { + name: "add affinity to daemonset", + existing: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: 
"collector:latest", + }, + }, + Affinity: &corev1.Affinity{ + NodeAffinity: &corev1.NodeAffinity{ + RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ + NodeSelectorTerms: []corev1.NodeSelectorTerm{ + { + MatchFields: []corev1.NodeSelectorRequirement{ + { + Key: "kubernetes.io/os", + Operator: corev1.NodeSelectorOpIn, + Values: []string{"linux"}, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + { + name: "remove affinity from daemonset", + existing: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + Affinity: &corev1.Affinity{ + NodeAffinity: &corev1.NodeAffinity{ + RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ + NodeSelectorTerms: []corev1.NodeSelectorTerm{ + { + MatchFields: []corev1.NodeSelectorRequirement{ + { + Key: "kubernetes.io/os", + Operator: corev1.NodeSelectorOpIn, + Values: []string{"linux"}, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + desired: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + }, + { + name: "modify affinity in daemonset", + existing: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + Affinity: &corev1.Affinity{ + NodeAffinity: &corev1.NodeAffinity{ + RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ + NodeSelectorTerms: []corev1.NodeSelectorTerm{ + { + MatchFields: 
[]corev1.NodeSelectorRequirement{ + { + Key: "kubernetes.io/os", + Operator: corev1.NodeSelectorOpIn, + Values: []string{"linux"}, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + desired: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + Affinity: &corev1.Affinity{ + NodeAffinity: &corev1.NodeAffinity{ + RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ + NodeSelectorTerms: []corev1.NodeSelectorTerm{ + { + MatchFields: []corev1.NodeSelectorRequirement{ + { + Key: "kubernetes.io/os", + Operator: corev1.NodeSelectorOpIn, + Values: []string{"windows"}, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + } + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + mutateFn := MutateFuncFor(&tt.existing, &tt.desired) + err := mutateFn() + require.NoError(t, err) + assert.Equal(t, tt.existing, tt.desired) + }) + } +} + +func TestMutateDeploymentAffinity(t *testing.T) { + tests := []struct { + name string + existing appsv1.Deployment + desired appsv1.Deployment + }{ + { + name: "add affinity to deployment", + existing: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + Affinity: &corev1.Affinity{ + NodeAffinity: &corev1.NodeAffinity{ + RequiredDuringSchedulingIgnoredDuringExecution: 
&corev1.NodeSelector{ + NodeSelectorTerms: []corev1.NodeSelectorTerm{ + { + MatchFields: []corev1.NodeSelectorRequirement{ + { + Key: "kubernetes.io/os", + Operator: corev1.NodeSelectorOpIn, + Values: []string{"linux"}, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + { + name: "remove affinity from deployment", + existing: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + Affinity: &corev1.Affinity{ + NodeAffinity: &corev1.NodeAffinity{ + RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ + NodeSelectorTerms: []corev1.NodeSelectorTerm{ + { + MatchFields: []corev1.NodeSelectorRequirement{ + { + Key: "kubernetes.io/os", + Operator: corev1.NodeSelectorOpIn, + Values: []string{"linux"}, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + desired: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + }, + { + name: "modify affinity in deployment", + existing: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + Affinity: &corev1.Affinity{ + NodeAffinity: &corev1.NodeAffinity{ + RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ + NodeSelectorTerms: []corev1.NodeSelectorTerm{ + { + MatchFields: []corev1.NodeSelectorRequirement{ + { + Key: "kubernetes.io/os", + Operator: corev1.NodeSelectorOpIn, + Values: []string{"linux"}, + }, + }, + }, + }, + }, + }, + 
}, + }, + }, + }, + }, + desired: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + Affinity: &corev1.Affinity{ + NodeAffinity: &corev1.NodeAffinity{ + RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ + NodeSelectorTerms: []corev1.NodeSelectorTerm{ + { + MatchFields: []corev1.NodeSelectorRequirement{ + { + Key: "kubernetes.io/os", + Operator: corev1.NodeSelectorOpIn, + Values: []string{"windows"}, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + } + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + mutateFn := MutateFuncFor(&tt.existing, &tt.desired) + err := mutateFn() + require.NoError(t, err) + assert.Equal(t, tt.existing, tt.desired) + }) + } +} + +func TestMutateStatefulSetAffinity(t *testing.T) { + tests := []struct { + name string + existing appsv1.StatefulSet + desired appsv1.StatefulSet + }{ + { + name: "add affinity to statefulset", + existing: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + Affinity: &corev1.Affinity{ + NodeAffinity: &corev1.NodeAffinity{ + RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ + NodeSelectorTerms: []corev1.NodeSelectorTerm{ + { + MatchFields: []corev1.NodeSelectorRequirement{ + { + Key: "kubernetes.io/os", + 
Operator: corev1.NodeSelectorOpIn, + Values: []string{"linux"}, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + { + name: "remove affinity from statefulset", + existing: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + Affinity: &corev1.Affinity{ + NodeAffinity: &corev1.NodeAffinity{ + RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ + NodeSelectorTerms: []corev1.NodeSelectorTerm{ + { + MatchFields: []corev1.NodeSelectorRequirement{ + { + Key: "kubernetes.io/os", + Operator: corev1.NodeSelectorOpIn, + Values: []string{"linux"}, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + desired: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + }, + { + name: "modify affinity in statefulset", + existing: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + Affinity: &corev1.Affinity{ + NodeAffinity: &corev1.NodeAffinity{ + RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ + NodeSelectorTerms: []corev1.NodeSelectorTerm{ + { + MatchFields: []corev1.NodeSelectorRequirement{ + { + Key: "kubernetes.io/os", + Operator: corev1.NodeSelectorOpIn, + Values: []string{"linux"}, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + desired: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: 
appsv1.StatefulSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + Affinity: &corev1.Affinity{ + NodeAffinity: &corev1.NodeAffinity{ + RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ + NodeSelectorTerms: []corev1.NodeSelectorTerm{ + { + MatchFields: []corev1.NodeSelectorRequirement{ + { + Key: "kubernetes.io/os", + Operator: corev1.NodeSelectorOpIn, + Values: []string{"windows"}, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + } + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + mutateFn := MutateFuncFor(&tt.existing, &tt.desired) + err := mutateFn() + require.NoError(t, err) + assert.Equal(t, tt.existing, tt.desired) + }) + } +} + +func TestMutateDaemonsetCollectorArgs(t *testing.T) { + tests := []struct { + name string + existing appsv1.DaemonSet + desired appsv1.DaemonSet + }{ + { + name: "add argument to collector container in daemonset", + existing: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + Args: []string{"--default-arg=true"}, + }, + }, + }, + }, + }, + }, + desired: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + Args: []string{"--default-arg=true", "extra-arg=yes"}, + }, + }, + }, + }, + }, + }, + }, + { + name: "remove extra arg from collector container in daemonset", + existing: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: 
[]corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + Args: []string{"--default-arg=true", "extra-arg=yes"}, + }, + }, + }, + }, + }, + }, + desired: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + Args: []string{"--default-arg=true"}, + }, + }, + }, + }, + }, + }, + }, + { + name: "modify extra arg in collector container in daemonset", + existing: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + Args: []string{"--default-arg=true", "extra-arg=yes"}, + }, + }, + }, + }, + }, + }, + desired: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + Args: []string{"--default-arg=true", "extra-arg=no"}, + }, + }, + }, + }, + }, + }, + }, + } + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + mutateFn := MutateFuncFor(&tt.existing, &tt.desired) + err := mutateFn() + require.NoError(t, err) + assert.Equal(t, tt.existing, tt.desired) + }) + } +} + +func TestMutateDeploymentCollectorArgs(t *testing.T) { + tests := []struct { + name string + existing appsv1.Deployment + desired appsv1.Deployment + }{ + { + name: "add argument to collector container in deployment", + existing: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: 
"collector:latest", + Args: []string{"--default-arg=true"}, + }, + }, + }, + }, + }, + }, + desired: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + Args: []string{"--default-arg=true", "extra-arg=yes"}, + }, + }, + }, + }, + }, + }, + }, + { + name: "remove extra arg from collector container in deployment", + existing: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + Args: []string{"--default-arg=true", "extra-arg=yes"}, + }, + }, + }, + }, + }, + }, + desired: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + Args: []string{"--default-arg=true"}, + }, + }, + }, + }, + }, + }, + }, + { + name: "modify extra arg in collector container in deployment", + existing: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + Args: []string{"--default-arg=true", "extra-arg=yes"}, + }, + }, + }, + }, + }, + }, + desired: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + Args: []string{"--default-arg=true", "extra-arg=no"}, + }, + }, + }, + }, + }, 
+ }, + }, + } + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + mutateFn := MutateFuncFor(&tt.existing, &tt.desired) + err := mutateFn() + require.NoError(t, err) + assert.Equal(t, tt.existing, tt.desired) + }) + } +} + +func TestMutateStatefulSetCollectorArgs(t *testing.T) { + tests := []struct { + name string + existing appsv1.StatefulSet + desired appsv1.StatefulSet + }{ + { + name: "add argument to collector container in statefulset", + existing: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + Args: []string{"--default-arg=true"}, + }, + }, + }, + }, + }, + }, + desired: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + Args: []string{"--default-arg=true", "extra-arg=yes"}, + }, + }, + }, + }, + }, + }, + }, + { + name: "remove extra arg from collector container in statefulset", + existing: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + Args: []string{"--default-arg=true", "extra-arg=yes"}, + }, + }, + }, + }, + }, + }, + desired: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + Args: []string{"--default-arg=true"}, + }, + }, + }, + }, + }, + }, + }, + { + name: "modify extra arg in 
collector container in statefulset",
+			existing: appsv1.StatefulSet{
+				ObjectMeta: metav1.ObjectMeta{
+					Name: "statefulset",
+				},
+				Spec: appsv1.StatefulSetSpec{
+					Template: corev1.PodTemplateSpec{
+						Spec: corev1.PodSpec{
+							Containers: []corev1.Container{
+								{
+									Name:  "collector",
+									Image: "collector:latest",
+									Args:  []string{"--default-arg=true", "extra-arg=yes"},
+								},
+							},
+						},
+					},
+				},
+			},
+			desired: appsv1.StatefulSet{
+				ObjectMeta: metav1.ObjectMeta{
+					Name: "statefulset",
+				},
+				Spec: appsv1.StatefulSetSpec{
+					Template: corev1.PodTemplateSpec{
+						Spec: corev1.PodSpec{
+							Containers: []corev1.Container{
+								{
+									Name:  "collector",
+									Image: "collector:latest",
+									Args:  []string{"--default-arg=true", "extra-arg=no"},
+								},
+							},
+						},
+					},
+				},
+			},
+		},
+	}
+	for _, tt := range tests {
+		tt := tt
+		t.Run(tt.name, func(t *testing.T) {
+			mutateFn := MutateFuncFor(&tt.existing, &tt.desired)
+			err := mutateFn()
+			require.NoError(t, err)
+			assert.Equal(t, tt.existing, tt.desired)
+		})
+	}
+}
+
+func TestNoImmutableLabelChange(t *testing.T) {
+	existingSelectorLabels := map[string]string{
+		"app.kubernetes.io/component":  "opentelemetry-collector",
+		"app.kubernetes.io/instance":   "default.deployment",
+		"app.kubernetes.io/managed-by": "opentelemetry-operator",
+		"app.kubernetes.io/part-of":    "opentelemetry",
+	}
+	desiredLabels := map[string]string{
+		"app.kubernetes.io/component":  "opentelemetry-collector",
+		"app.kubernetes.io/instance":   "default.deployment",
+		"app.kubernetes.io/managed-by": "opentelemetry-operator",
+		"app.kubernetes.io/part-of":    "opentelemetry",
+		"extra-label":                  "true",
+	}
+	err := hasImmutableLabelChange(existingSelectorLabels, desiredLabels)
+	require.NoError(t, err)
+	assert.NoError(t, err)
+}
+
+func TestHasImmutableLabelChange(t *testing.T) {
+	existingSelectorLabels := map[string]string{
+		"app.kubernetes.io/component":  "opentelemetry-collector",
+		"app.kubernetes.io/instance":   "default.deployment",
+		"app.kubernetes.io/managed-by": "opentelemetry-operator",
+		"app.kubernetes.io/part-of":    "opentelemetry",
+	}
+	desiredLabels := map[string]string{
+		"app.kubernetes.io/component":  "opentelemetry-collector",
+		"app.kubernetes.io/instance":   "default.deployment",
+		"app.kubernetes.io/managed-by": "opentelemetry-operator",
+		"app.kubernetes.io/part-of":    "not-opentelemetry",
+	}
+	err := hasImmutableLabelChange(existingSelectorLabels, desiredLabels)
+	assert.Error(t, err)
+}
+
+func TestMissingImmutableLabelChange(t *testing.T) {
+	existingSelectorLabels := map[string]string{
+		"app.kubernetes.io/component":  "opentelemetry-collector",
+		"app.kubernetes.io/instance":   "default.deployment",
+		"app.kubernetes.io/managed-by": "opentelemetry-operator",
+		"app.kubernetes.io/part-of":    "opentelemetry",
+	}
+	desiredLabels := map[string]string{
+		"app.kubernetes.io/component":  "opentelemetry-collector",
+		"app.kubernetes.io/instance":   "default.deployment",
+		"app.kubernetes.io/managed-by": "opentelemetry-operator",
+	}
+	err := hasImmutableLabelChange(existingSelectorLabels, desiredLabels)
+	assert.Error(t, err)
+}
+
+func TestMutateDaemonsetError(t *testing.T) {
+	tests := []struct {
+		name     string
+		existing appsv1.DaemonSet
+		desired  appsv1.DaemonSet
+	}{
+		{
+			name: "modified immutable label in daemonset",
+			existing: appsv1.DaemonSet{
+				ObjectMeta: metav1.ObjectMeta{
+					CreationTimestamp: metav1.Now(),
+					Name:              "daemonset",
+				},
+				Spec: appsv1.DaemonSetSpec{
+					Selector: &metav1.LabelSelector{
+						MatchLabels: map[string]string{
+							"app.kubernetes.io/component":  "opentelemetry-collector",
+							"app.kubernetes.io/instance":   "default.daemonset",
+							"app.kubernetes.io/managed-by": "opentelemetry-operator",
+							"app.kubernetes.io/part-of":    "opentelemetry",
+						},
+					},
+					Template: corev1.PodTemplateSpec{
+						ObjectMeta: metav1.ObjectMeta{
+							Labels: map[string]string{
+								"app.kubernetes.io/component":  "opentelemetry-collector",
+								"app.kubernetes.io/instance":   "default.daemonset",
+								"app.kubernetes.io/managed-by": "opentelemetry-operator",
+								"app.kubernetes.io/part-of":    "opentelemetry",
+							},
+						},
+						Spec: corev1.PodSpec{
+							Containers: []corev1.Container{
+								{
+									Name:  "collector",
+									Image: "collector:latest",
+								},
+							},
+						},
+					},
+				},
+			},
+			desired: appsv1.DaemonSet{
+				ObjectMeta: metav1.ObjectMeta{
+					Name: "daemonset",
+				},
+				Spec: appsv1.DaemonSetSpec{
+					Selector: &metav1.LabelSelector{
+						MatchLabels: map[string]string{
+							"app.kubernetes.io/component":  "opentelemetry-collector",
+							"app.kubernetes.io/instance":   "default.daemonset",
+							"app.kubernetes.io/managed-by": "opentelemetry-operator",
+							"app.kubernetes.io/part-of":    "opentelemetry",
+						},
+					},
+					Template: corev1.PodTemplateSpec{
+						ObjectMeta: metav1.ObjectMeta{
+							Labels: map[string]string{
+								"app.kubernetes.io/component":  "opentelemetry-collector",
+								"app.kubernetes.io/instance":   "default.daemonset",
+								"app.kubernetes.io/managed-by": "opentelemetry-operator",
+								"app.kubernetes.io/part-of":    "not-opentelemetry",
+							},
+						},
+						Spec: corev1.PodSpec{
+							Containers: []corev1.Container{
+								{
+									Name:  "collector",
+									Image: "collector:latest",
+								},
+							},
+						},
+					},
+				},
+			},
+		},
+		{
+			name: "modified immutable selector in daemonset",
+			existing: appsv1.DaemonSet{
+				ObjectMeta: metav1.ObjectMeta{
+					CreationTimestamp: metav1.Now(),
+					Name:              "daemonset",
+				},
+				Spec: appsv1.DaemonSetSpec{
+					Selector: &metav1.LabelSelector{
+						MatchLabels: map[string]string{
+							"app.kubernetes.io/component":  "opentelemetry-collector",
+							"app.kubernetes.io/instance":   "default.daemonset",
+							"app.kubernetes.io/managed-by": "opentelemetry-operator",
+							"app.kubernetes.io/part-of":    "opentelemetry",
+						},
+					},
+					Template: corev1.PodTemplateSpec{
+						ObjectMeta: metav1.ObjectMeta{
+							Labels: map[string]string{
+								"app.kubernetes.io/component":  "opentelemetry-collector",
+								"app.kubernetes.io/instance":   "default.daemonset",
+								"app.kubernetes.io/managed-by": "opentelemetry-operator",
+								"app.kubernetes.io/part-of":    "opentelemetry",
+							},
+						},
+						Spec: corev1.PodSpec{
+							Containers: []corev1.Container{
+								{
+									Name:  "collector",
+									Image: "collector:latest",
+								},
+							},
+						},
+					},
+				},
+			},
+			desired: appsv1.DaemonSet{
+				ObjectMeta: metav1.ObjectMeta{
+					Name: "daemonset",
+				},
+				Spec: appsv1.DaemonSetSpec{
+					Selector: &metav1.LabelSelector{
+						MatchLabels: map[string]string{
+							"app.kubernetes.io/component":  "opentelemetry-collector",
+							"app.kubernetes.io/instance":   "default.daemonset",
+							"app.kubernetes.io/managed-by": "opentelemetry-operator",
+							"app.kubernetes.io/part-of":    "not-opentelemetry",
+						},
+					},
+					Template: corev1.PodTemplateSpec{
+						ObjectMeta: metav1.ObjectMeta{
+							Labels: map[string]string{
+								"app.kubernetes.io/component":  "opentelemetry-collector",
+								"app.kubernetes.io/instance":   "default.daemonset",
+								"app.kubernetes.io/managed-by": "opentelemetry-operator",
+								"app.kubernetes.io/part-of":    "opentelemetry",
+							},
+						},
+						Spec: corev1.PodSpec{
+							Containers: []corev1.Container{
+								{
+									Name:  "collector",
+									Image: "collector:latest",
+								},
+							},
+						},
+					},
+				},
+			},
+		},
+	}
+	for _, tt := range tests {
+		tt := tt
+		t.Run(tt.name, func(t *testing.T) {
+			mutateFn := MutateFuncFor(&tt.existing, &tt.desired)
+			err := mutateFn()
+			assert.Error(t, err)
+		})
+	}
+}
+
+func TestMutateDeploymentError(t *testing.T) {
+	tests := []struct {
+		name     string
+		existing appsv1.Deployment
+		desired  appsv1.Deployment
+	}{
+		{
+			name: "modified immutable label in deployment",
+			existing: appsv1.Deployment{
+				ObjectMeta: metav1.ObjectMeta{
+					CreationTimestamp: metav1.Now(),
+					Name:              "deployment",
+				},
+				Spec: appsv1.DeploymentSpec{
+					Selector: &metav1.LabelSelector{
+						MatchLabels: map[string]string{
+							"app.kubernetes.io/component":  "opentelemetry-collector",
+							"app.kubernetes.io/instance":   "default.deployment",
+							"app.kubernetes.io/managed-by": "opentelemetry-operator",
+							"app.kubernetes.io/part-of":    "opentelemetry",
+						},
+					},
+					Template: corev1.PodTemplateSpec{
+						ObjectMeta: metav1.ObjectMeta{
+							Labels: map[string]string{
+								"app.kubernetes.io/component": 
"opentelemetry-collector", + "app.kubernetes.io/instance": "default.deployment", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.deployment", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.deployment", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "not-opentelemetry", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + }, + { + name: "modified immutable selector in deployment", + existing: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + CreationTimestamp: metav1.Now(), + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.deployment", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.deployment", + 
"app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.deployment", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "not-opentelemetry", + }, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.deployment", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + }, + } + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + mutateFn := MutateFuncFor(&tt.existing, &tt.desired) + err := mutateFn() + assert.Error(t, err) + }) + } +} + +func TestMutateStatefulSetError(t *testing.T) { + tests := []struct { + name string + existing appsv1.StatefulSet + desired appsv1.StatefulSet + }{ + { + name: "modified immutable label in statefulset", + existing: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + CreationTimestamp: metav1.Now(), + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.statefulset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": 
"opentelemetry", + }, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.statefulset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.statefulset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.statefulset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "not-opentelemetry", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + }, + { + name: "modified immutable selector in statefulset", + existing: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + CreationTimestamp: metav1.Now(), + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.statefulset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + 
Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.statefulset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.statefulset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "not-opentelemetry", + }, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.statefulset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + }, + } + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + mutateFn := MutateFuncFor(&tt.existing, &tt.desired) + err := mutateFn() + assert.Error(t, err) + }) + } +} + +func TestMutateDaemonsetLabelChange(t *testing.T) { + tests := []struct { + name string + existing appsv1.DaemonSet + desired appsv1.DaemonSet + }{ + { + name: "modified label in daemonset", + existing: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + CreationTimestamp: metav1.Now(), + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + 
"app.kubernetes.io/instance": "default.daemonset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.daemonset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + "user-label": "existing", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.daemonset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.daemonset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + "user-label": "desired", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + }, + { + name: "new label in daemonset", + existing: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + CreationTimestamp: metav1.Now(), + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.daemonset", + "app.kubernetes.io/managed-by": 
"opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.daemonset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + "user-label": "existing", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.DaemonSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daemonset", + }, + Spec: appsv1.DaemonSetSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.daemonset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.daemonset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + "user-label": "existing", + "new-user-label": "desired", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + }, + } + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + mutateFn := MutateFuncFor(&tt.existing, &tt.desired) + err := mutateFn() + require.NoError(t, err) + assert.Equal(t, tt.existing.Spec, tt.desired.Spec) + }) + } +} + +func TestMutateDeploymentLabelChange(t *testing.T) { + tests := []struct { + name string + existing appsv1.Deployment + desired appsv1.Deployment + }{ + { + name: "modified label in deployment", + existing: 
appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + CreationTimestamp: metav1.Now(), + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.deployment", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.deployment", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + "user-label": "existing", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.deployment", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.deployment", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + "user-label": "desired", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + }, + { + name: "new label in deployment", + existing: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + CreationTimestamp: 
metav1.Now(), + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.deployment", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.deployment", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + "user-label": "existing", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "deployment", + }, + Spec: appsv1.DeploymentSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.deployment", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.deployment", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + "user-label": "existing", + "new-user-label": "desired", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + }, + } + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + mutateFn := MutateFuncFor(&tt.existing, &tt.desired) + err := mutateFn() + 
require.NoError(t, err) + assert.Equal(t, tt.existing.Spec, tt.desired.Spec) + }) + } +} + +func TestMutateStatefulSetLabelChange(t *testing.T) { + tests := []struct { + name string + existing appsv1.StatefulSet + desired appsv1.StatefulSet + }{ + { + name: "modified label in statefulset", + existing: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + CreationTimestamp: metav1.Now(), + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.statefulset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.statefulset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + "user-label": "existing", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.statefulset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.statefulset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": 
"opentelemetry", + "user-label": "desired", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + }, + { + name: "new label in statefulset", + existing: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + CreationTimestamp: metav1.Now(), + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.statefulset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.statefulset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + "user-label": "existing", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + desired: appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "statefulset", + }, + Spec: appsv1.StatefulSetSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.statefulset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + }, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "app.kubernetes.io/component": "opentelemetry-collector", + "app.kubernetes.io/instance": "default.statefulset", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + "user-label": "existing", + "new-user-label": 
"desired", + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "collector", + Image: "collector:latest", + }, + }, + }, + }, + }, + }, + }, + } + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + mutateFn := MutateFuncFor(&tt.existing, &tt.desired) + err := mutateFn() + require.NoError(t, err) + assert.Equal(t, tt.existing.Spec, tt.desired.Spec) + }) + } +} diff --git a/internal/manifests/params.go b/internal/manifests/params.go index 69be71fb0b..4f18b74591 100644 --- a/internal/manifests/params.go +++ b/internal/manifests/params.go @@ -23,6 +23,7 @@ import ( "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" "github.com/open-telemetry/opentelemetry-operator/internal/config" + "github.com/open-telemetry/opentelemetry-operator/internal/rbac" ) // Params holds the reconciliation-specific parameters. @@ -35,4 +36,6 @@ type Params struct { TargetAllocator *v1alpha1.TargetAllocator OpAMPBridge v1alpha1.OpAMPBridge Config config.Config + Reviewer rbac.SAReviewer + ErrorAsWarning bool } diff --git a/internal/manifests/targetallocator/adapters/config_to_prom_config.go b/internal/manifests/targetallocator/adapters/config_to_prom_config.go index e0d7cd38e2..6395c6f2a0 100644 --- a/internal/manifests/targetallocator/adapters/config_to_prom_config.go +++ b/internal/manifests/targetallocator/adapters/config_to_prom_config.go @@ -23,6 +23,8 @@ import ( "github.com/open-telemetry/opentelemetry-operator/internal/manifests/collector/adapters" ) +type TAOption func(targetAllocatorCfg map[interface{}]interface{}) error + func errorNoComponent(component string) error { return fmt.Errorf("no %s available as part of the configuration", component) } @@ -257,10 +259,31 @@ func AddHTTPSDConfigToPromConfig(prometheus map[interface{}]interface{}, taServi return prometheus, nil } +func WithTLSConfig(caFile, certFile, keyFile, taServiceName string) TAOption { + 
return func(targetAllocatorCfg map[interface{}]interface{}) error { + if _, exists := targetAllocatorCfg["tls"]; !exists { + targetAllocatorCfg["tls"] = make(map[interface{}]interface{}) + } + + tlsCfg, ok := targetAllocatorCfg["tls"].(map[interface{}]interface{}) + if !ok { + return errorNotAMap("tls") + } + + tlsCfg["ca_file"] = caFile + tlsCfg["cert_file"] = certFile + tlsCfg["key_file"] = keyFile + + targetAllocatorCfg["endpoint"] = fmt.Sprintf("https://%s:443", taServiceName) + + return nil + } +} + // AddTAConfigToPromConfig adds or updates the target_allocator configuration in the Prometheus configuration. // If the `EnableTargetAllocatorRewrite` feature flag for the target allocator is enabled, this function // removes the existing scrape_configs from the collector's Prometheus configuration as it's not required. -func AddTAConfigToPromConfig(prometheus map[interface{}]interface{}, taServiceName string) (map[interface{}]interface{}, error) { +func AddTAConfigToPromConfig(prometheus map[interface{}]interface{}, taServiceName string, taOpts ...TAOption) (map[interface{}]interface{}, error) { prometheusConfigProperty, ok := prometheus["config"] if !ok { return nil, errorNoComponent("prometheusConfig") @@ -285,6 +308,13 @@ func AddTAConfigToPromConfig(prometheus map[interface{}]interface{}, taServiceNa targetAllocatorCfg["interval"] = "30s" targetAllocatorCfg["collector_id"] = "${POD_NAME}" + for _, opt := range taOpts { + err := opt(targetAllocatorCfg) + if err != nil { + return nil, err + } + } + // Remove the scrape_configs key from the map delete(prometheusCfg, "scrape_configs") diff --git a/internal/manifests/targetallocator/adapters/config_to_prom_config_test.go b/internal/manifests/targetallocator/adapters/config_to_prom_config_test.go index 2ad7b741c6..b06d1ed67a 100644 --- a/internal/manifests/targetallocator/adapters/config_to_prom_config_test.go +++ b/internal/manifests/targetallocator/adapters/config_to_prom_config_test.go @@ -518,3 +518,45 @@ func 
TestValidateTargetAllocatorConfig(t *testing.T) { }) } } + +func TestAddTAConfigToPromConfigWithTLSConfig(t *testing.T) { + t.Run("should return expected prom config map with TA config and TLS config", func(t *testing.T) { + cfg := map[interface{}]interface{}{ + "config": map[interface{}]interface{}{ + "scrape_configs": []interface{}{ + map[interface{}]interface{}{ + "job_name": "test_job", + "static_configs": []interface{}{ + map[interface{}]interface{}{ + "targets": []interface{}{ + "localhost:9090", + }, + }, + }, + }, + }, + }, + } + + taServiceName := "test-targetallocator" + + expectedResult := map[interface{}]interface{}{ + "config": map[interface{}]interface{}{}, + "target_allocator": map[interface{}]interface{}{ + "endpoint": "https://test-targetallocator:443", + "interval": "30s", + "collector_id": "${POD_NAME}", + "tls": map[interface{}]interface{}{ + "ca_file": "ca.crt", + "cert_file": "tls.crt", + "key_file": "tls.key", + }, + }, + } + + result, err := ta.AddTAConfigToPromConfig(cfg, taServiceName, ta.WithTLSConfig("ca.crt", "tls.crt", "tls.key", taServiceName)) + + assert.NoError(t, err) + assert.Equal(t, expectedResult, result) + }) +} diff --git a/internal/manifests/targetallocator/certificate.go b/internal/manifests/targetallocator/certificate.go new file mode 100644 index 0000000000..46357eca23 --- /dev/null +++ b/internal/manifests/targetallocator/certificate.go @@ -0,0 +1,118 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+// See the License for the specific language governing permissions and +// limitations under the License. + +package targetallocator + +import ( + "fmt" + + cmv1 "github.com/cert-manager/cert-manager/pkg/apis/certmanager/v1" + cmmeta "github.com/cert-manager/cert-manager/pkg/apis/meta/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/open-telemetry/opentelemetry-operator/internal/manifests/manifestutils" + "github.com/open-telemetry/opentelemetry-operator/internal/naming" +) + +// CACertificate returns a CA Certificate for the given instance. +func CACertificate(params Params) *cmv1.Certificate { + name := naming.CACertificate(params.TargetAllocator.Name) + labels := manifestutils.Labels(params.TargetAllocator.ObjectMeta, name, params.TargetAllocator.Spec.Image, ComponentOpenTelemetryTargetAllocator, nil) + + return &cmv1.Certificate{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: params.TargetAllocator.Namespace, + Name: name, + Labels: labels, + }, + Spec: cmv1.CertificateSpec{ + IsCA: true, + CommonName: naming.CACertificate(params.TargetAllocator.Name), + Subject: &cmv1.X509Subject{ + OrganizationalUnits: []string{"opentelemetry-operator"}, + }, + SecretName: naming.CACertificate(params.TargetAllocator.Name), + IssuerRef: cmmeta.ObjectReference{ + Name: naming.SelfSignedIssuer(params.TargetAllocator.Name), + Kind: "Issuer", + }, + }, + } +} + +// ServingCertificate returns a serving Certificate for the given instance. 
+func ServingCertificate(params Params) *cmv1.Certificate { + name := naming.TAServerCertificate(params.TargetAllocator.Name) + labels := manifestutils.Labels(params.TargetAllocator.ObjectMeta, name, params.TargetAllocator.Spec.Image, ComponentOpenTelemetryTargetAllocator, nil) + + return &cmv1.Certificate{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: params.TargetAllocator.Namespace, + Name: name, + Labels: labels, + }, + Spec: cmv1.CertificateSpec{ + DNSNames: []string{ + naming.TAService(params.TargetAllocator.Name), + fmt.Sprintf("%s.%s.svc", naming.TAService(params.TargetAllocator.Name), params.TargetAllocator.Namespace), + fmt.Sprintf("%s.%s.svc.cluster.local", naming.TAService(params.TargetAllocator.Name), params.TargetAllocator.Namespace), + }, + IssuerRef: cmmeta.ObjectReference{ + Kind: "Issuer", + Name: naming.CAIssuer(params.TargetAllocator.Name), + }, + Usages: []cmv1.KeyUsage{ + cmv1.UsageClientAuth, + cmv1.UsageServerAuth, + }, + SecretName: naming.TAServerCertificate(params.TargetAllocator.Name), + Subject: &cmv1.X509Subject{ + OrganizationalUnits: []string{"opentelemetry-operator"}, + }, + }, + } +} + +// ClientCertificate returns a client Certificate for the given instance. 
+func ClientCertificate(params Params) *cmv1.Certificate { + name := naming.TAClientCertificate(params.TargetAllocator.Name) + labels := manifestutils.Labels(params.TargetAllocator.ObjectMeta, name, params.TargetAllocator.Spec.Image, ComponentOpenTelemetryTargetAllocator, nil) + + return &cmv1.Certificate{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: params.TargetAllocator.Namespace, + Name: name, + Labels: labels, + }, + Spec: cmv1.CertificateSpec{ + DNSNames: []string{ + naming.TAService(params.TargetAllocator.Name), + fmt.Sprintf("%s.%s.svc", naming.TAService(params.TargetAllocator.Name), params.TargetAllocator.Namespace), + fmt.Sprintf("%s.%s.svc.cluster.local", naming.TAService(params.TargetAllocator.Name), params.TargetAllocator.Namespace), + }, + IssuerRef: cmmeta.ObjectReference{ + Kind: "Issuer", + Name: naming.CAIssuer(params.TargetAllocator.Name), + }, + Usages: []cmv1.KeyUsage{ + cmv1.UsageClientAuth, + cmv1.UsageServerAuth, + }, + SecretName: naming.TAClientCertificate(params.TargetAllocator.Name), + Subject: &cmv1.X509Subject{ + OrganizationalUnits: []string{"opentelemetry-operator"}, + }, + }, + } +} diff --git a/internal/manifests/targetallocator/certificate_test.go b/internal/manifests/targetallocator/certificate_test.go new file mode 100644 index 0000000000..ae9dceb6a7 --- /dev/null +++ b/internal/manifests/targetallocator/certificate_test.go @@ -0,0 +1,221 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+// See the License for the specific language governing permissions and +// limitations under the License. + +package targetallocator + +import ( + "testing" + + "github.com/stretchr/testify/assert" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" + "github.com/open-telemetry/opentelemetry-operator/internal/config" +) + +type CACertificateConfig struct { + Name string + Namespace string + SecretName string + IssuerName string +} + +type ServingCertificateConfig struct { + Name string + Namespace string + SecretName string + IssuerName string +} + +type ClientCertificateConfig struct { + Name string + Namespace string + SecretName string + IssuerName string +} + +func TestCACertificate(t *testing.T) { + tests := []struct { + name string + targetAllocator v1alpha1.TargetAllocator + expectedCAConfig CACertificateConfig + expectedLabels map[string]string + }{ + { + name: "Default CA Certificate", + targetAllocator: v1alpha1.TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + Name: "my-instance", + Namespace: "my-namespace", + }, + }, + expectedCAConfig: CACertificateConfig{ + Name: "my-instance-ca-cert", + Namespace: "my-namespace", + SecretName: "my-instance-ca-cert", + IssuerName: "my-instance-self-signed-issuer", + }, + expectedLabels: map[string]string{ + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/instance": "my-namespace.my-instance", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/name": "my-instance-ca-cert", + "app.kubernetes.io/version": "latest", + }, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + params := Params{ + TargetAllocator: tt.targetAllocator, + Config: config.New(), + } + + caCert := CACertificate(params) + + assert.Equal(t, tt.expectedCAConfig.Name, caCert.Name) + assert.Equal(t, tt.expectedCAConfig.Namespace, caCert.Namespace) 
+ assert.Equal(t, tt.expectedCAConfig.SecretName, caCert.Spec.SecretName) + assert.Equal(t, tt.expectedCAConfig.IssuerName, caCert.Spec.IssuerRef.Name) + assert.True(t, caCert.Spec.IsCA) + assert.Equal(t, "Issuer", caCert.Spec.IssuerRef.Kind) + assert.Equal(t, []string{"opentelemetry-operator"}, caCert.Spec.Subject.OrganizationalUnits) + assert.Equal(t, tt.expectedLabels, caCert.Labels) + }) + } +} + +func TestServingCertificate(t *testing.T) { + tests := []struct { + name string + targetAllocator v1alpha1.TargetAllocator + expectedServingConfig ServingCertificateConfig + expectedDNSNames []string + expectedOrganizationUnit []string + expectedLabels map[string]string + }{ + { + name: "Default Serving Certificate", + targetAllocator: v1alpha1.TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + Name: "my-instance", + Namespace: "my-namespace", + }, + }, + expectedServingConfig: ServingCertificateConfig{ + Name: "my-instance-ta-server-cert", + Namespace: "my-namespace", + SecretName: "my-instance-ta-server-cert", + IssuerName: "my-instance-ca-issuer", + }, + expectedDNSNames: []string{ + "my-instance-targetallocator", + "my-instance-targetallocator.my-namespace.svc", + "my-instance-targetallocator.my-namespace.svc.cluster.local", + }, + expectedOrganizationUnit: []string{"opentelemetry-operator"}, + expectedLabels: map[string]string{ + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/instance": "my-namespace.my-instance", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/name": "my-instance-ta-server-cert", + "app.kubernetes.io/version": "latest", + }, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + params := Params{ + TargetAllocator: tt.targetAllocator, + Config: config.New(), + } + + servingCert := ServingCertificate(params) + + assert.Equal(t, tt.expectedServingConfig.Name, servingCert.Name) + assert.Equal(t, 
tt.expectedServingConfig.Namespace, servingCert.Namespace) + assert.Equal(t, tt.expectedServingConfig.SecretName, servingCert.Spec.SecretName) + assert.Equal(t, tt.expectedServingConfig.IssuerName, servingCert.Spec.IssuerRef.Name) + assert.Equal(t, "Issuer", servingCert.Spec.IssuerRef.Kind) + assert.ElementsMatch(t, tt.expectedDNSNames, servingCert.Spec.DNSNames) + assert.ElementsMatch(t, tt.expectedOrganizationUnit, servingCert.Spec.Subject.OrganizationalUnits) + assert.Equal(t, tt.expectedLabels, servingCert.Labels) + }) + } +} + +func TestClientCertificate(t *testing.T) { + tests := []struct { + name string + targetAllocator v1alpha1.TargetAllocator + expectedClientConfig ClientCertificateConfig + expectedDNSNames []string + expectedOrganizationUnit []string + expectedLabels map[string]string + }{ + { + name: "Default Client Certificate", + targetAllocator: v1alpha1.TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + Name: "my-instance", + Namespace: "my-namespace", + }, + }, + expectedClientConfig: ClientCertificateConfig{ + Name: "my-instance-ta-client-cert", + Namespace: "my-namespace", + SecretName: "my-instance-ta-client-cert", + IssuerName: "my-instance-ca-issuer", + }, + expectedDNSNames: []string{ + "my-instance-targetallocator", + "my-instance-targetallocator.my-namespace.svc", + "my-instance-targetallocator.my-namespace.svc.cluster.local", + }, + expectedOrganizationUnit: []string{"opentelemetry-operator"}, + expectedLabels: map[string]string{ + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/instance": "my-namespace.my-instance", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/name": "my-instance-ta-client-cert", + "app.kubernetes.io/version": "latest", + }, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + params := Params{ + TargetAllocator: tt.targetAllocator, + Config: config.New(), + } + + clientCert 
:= ClientCertificate(params) + + assert.Equal(t, tt.expectedClientConfig.Name, clientCert.Name) + assert.Equal(t, tt.expectedClientConfig.Namespace, clientCert.Namespace) + assert.Equal(t, tt.expectedClientConfig.SecretName, clientCert.Spec.SecretName) + assert.Equal(t, tt.expectedClientConfig.IssuerName, clientCert.Spec.IssuerRef.Name) + assert.Equal(t, "Issuer", clientCert.Spec.IssuerRef.Kind) + assert.ElementsMatch(t, tt.expectedDNSNames, clientCert.Spec.DNSNames) + assert.ElementsMatch(t, tt.expectedOrganizationUnit, clientCert.Spec.Subject.OrganizationalUnits) + assert.Equal(t, tt.expectedLabels, clientCert.Labels) + }) + } +} diff --git a/internal/manifests/targetallocator/configmap.go b/internal/manifests/targetallocator/configmap.go index 496c625ae3..36defd088e 100644 --- a/internal/manifests/targetallocator/configmap.go +++ b/internal/manifests/targetallocator/configmap.go @@ -15,16 +15,21 @@ package targetallocator import ( + "path/filepath" + "github.com/mitchellh/mapstructure" "gopkg.in/yaml.v2" corev1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager" "github.com/open-telemetry/opentelemetry-operator/internal/manifests/collector" "github.com/open-telemetry/opentelemetry-operator/internal/manifests/manifestutils" "github.com/open-telemetry/opentelemetry-operator/internal/manifests/targetallocator/adapters" "github.com/open-telemetry/opentelemetry-operator/internal/naming" + "github.com/open-telemetry/opentelemetry-operator/pkg/constants" + "github.com/open-telemetry/opentelemetry-operator/pkg/featuregate" ) const ( @@ -85,6 +90,11 @@ func ConfigMap(params Params) (*corev1.ConfigMap, error) { } else { taConfig["allocation_strategy"] = v1beta1.TargetAllocatorAllocationStrategyConsistentHashing } + + if featuregate.EnableTargetAllocatorFallbackStrategy.IsEnabled() { + 
taConfig["allocation_fallback_strategy"] = v1beta1.TargetAllocatorAllocationStrategyConsistentHashing + } + taConfig["filter_strategy"] = taSpec.FilterStrategy if taSpec.PrometheusCR.Enabled { @@ -102,6 +112,16 @@ func ConfigMap(params Params) (*corev1.ConfigMap, error) { taConfig["prometheus_cr"] = prometheusCRConfig } + if params.Config.CertManagerAvailability() == certmanager.Available && featuregate.EnableTargetAllocatorMTLS.IsEnabled() { + taConfig["https"] = map[string]interface{}{ + "enabled": true, + "listen_addr": ":8443", + "ca_file_path": filepath.Join(constants.TACollectorTLSDirPath, constants.TACollectorCAFileName), + "tls_cert_file_path": filepath.Join(constants.TACollectorTLSDirPath, constants.TACollectorTLSCertFileName), + "tls_key_file_path": filepath.Join(constants.TACollectorTLSDirPath, constants.TACollectorTLSKeyFileName), + } + } + taConfigYAML, err := yaml.Marshal(taConfig) if err != nil { return &corev1.ConfigMap{}, err diff --git a/internal/manifests/targetallocator/configmap_test.go b/internal/manifests/targetallocator/configmap_test.go index 66553bf783..967eef25e8 100644 --- a/internal/manifests/targetallocator/configmap_test.go +++ b/internal/manifests/targetallocator/configmap_test.go @@ -23,10 +23,13 @@ import ( "github.com/mitchellh/mapstructure" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" + colfg "go.opentelemetry.io/collector/featuregate" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager" "github.com/open-telemetry/opentelemetry-operator/internal/config" + "github.com/open-telemetry/opentelemetry-operator/pkg/featuregate" ) func TestDesiredConfigMap(t *testing.T) { @@ -239,6 +242,118 @@ prometheus_cr: }) + t.Run("should return expected target allocator config map with HTTPS configuration", func(t *testing.T) { + expectedLabels["app.kubernetes.io/component"] = 
"opentelemetry-targetallocator" + expectedLabels["app.kubernetes.io/name"] = "my-instance-targetallocator" + + cfg := config.New(config.WithCertManagerAvailability(certmanager.Available)) + + flgs := featuregate.Flags(colfg.GlobalRegistry()) + err := flgs.Parse([]string{"--feature-gates=operator.targetallocator.mtls"}) + require.NoError(t, err) + + testParams := Params{ + Collector: collector, + TargetAllocator: targetAllocator, + Config: cfg, + } + + expectedData := map[string]string{ + targetAllocatorFilename: `allocation_strategy: consistent-hashing +collector_selector: + matchlabels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/instance: default.my-instance + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/part-of: opentelemetry + matchexpressions: [] +config: + scrape_configs: + - job_name: otel-collector + scrape_interval: 10s + static_configs: + - targets: + - 0.0.0.0:8888 + - 0.0.0.0:9999 +filter_strategy: relabel-config +https: + ca_file_path: /tls/ca.crt + enabled: true + listen_addr: :8443 + tls_cert_file_path: /tls/tls.crt + tls_key_file_path: /tls/tls.key +prometheus_cr: + enabled: true + pod_monitor_selector: null + scrape_interval: 30s + service_monitor_selector: null +`, + } + + actual, err := ConfigMap(testParams) + assert.NoError(t, err) + + assert.Equal(t, "my-instance-targetallocator", actual.Name) + assert.Equal(t, expectedLabels, actual.Labels) + assert.Equal(t, expectedData, actual.Data) + }) + + t.Run("should return expected target allocator config map allocation fallback strategy", func(t *testing.T) { + expectedLabels["app.kubernetes.io/component"] = "opentelemetry-targetallocator" + expectedLabels["app.kubernetes.io/name"] = "my-instance-targetallocator" + + cfg := config.New(config.WithCertManagerAvailability(certmanager.Available)) + + flgs := featuregate.Flags(colfg.GlobalRegistry()) + err := flgs.Parse([]string{"--feature-gates=operator.targetallocator.fallbackstrategy"}) + 
require.NoError(t, err) + + testParams := Params{ + Collector: collector, + TargetAllocator: targetAllocator, + Config: cfg, + } + + expectedData := map[string]string{ + targetAllocatorFilename: `allocation_fallback_strategy: consistent-hashing +allocation_strategy: consistent-hashing +collector_selector: + matchlabels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/instance: default.my-instance + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/part-of: opentelemetry + matchexpressions: [] +config: + scrape_configs: + - job_name: otel-collector + scrape_interval: 10s + static_configs: + - targets: + - 0.0.0.0:8888 + - 0.0.0.0:9999 +filter_strategy: relabel-config +https: + ca_file_path: /tls/ca.crt + enabled: true + listen_addr: :8443 + tls_cert_file_path: /tls/tls.crt + tls_key_file_path: /tls/tls.key +prometheus_cr: + enabled: true + pod_monitor_selector: null + scrape_interval: 30s + service_monitor_selector: null +`, + } + + actual, err := ConfigMap(testParams) + assert.NoError(t, err) + + assert.Equal(t, "my-instance-targetallocator", actual.Name) + assert.Equal(t, expectedLabels, actual.Labels) + assert.Equal(t, expectedData, actual.Data) + }) } func TestGetScrapeConfigsFromOtelConfig(t *testing.T) { diff --git a/internal/manifests/targetallocator/container.go b/internal/manifests/targetallocator/container.go index 4409912a76..f1e5e78bbc 100644 --- a/internal/manifests/targetallocator/container.go +++ b/internal/manifests/targetallocator/container.go @@ -24,8 +24,10 @@ import ( "k8s.io/apimachinery/pkg/util/intstr" "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager" "github.com/open-telemetry/opentelemetry-operator/internal/config" "github.com/open-telemetry/opentelemetry-operator/internal/naming" + "github.com/open-telemetry/opentelemetry-operator/pkg/constants" 
"github.com/open-telemetry/opentelemetry-operator/pkg/featuregate" ) @@ -128,6 +130,18 @@ func Container(cfg config.Config, logger logr.Logger, instance v1alpha1.TargetAl }, } + if cfg.CertManagerAvailability() == certmanager.Available && featuregate.EnableTargetAllocatorMTLS.IsEnabled() { + ports = append(ports, corev1.ContainerPort{ + Name: "https", + ContainerPort: 8443, + Protocol: corev1.ProtocolTCP, + }) + volumeMounts = append(volumeMounts, corev1.VolumeMount{ + Name: naming.TAServerCertificate(instance.Name), + MountPath: constants.TACollectorTLSDirPath, + }) + } + envVars = append(envVars, proxy.ReadProxyVarsFromEnv()...) return corev1.Container{ Name: naming.TAContainer(), diff --git a/internal/manifests/targetallocator/container_test.go b/internal/manifests/targetallocator/container_test.go index 6bfcce4eb9..7ce57d4257 100644 --- a/internal/manifests/targetallocator/container_test.go +++ b/internal/manifests/targetallocator/container_test.go @@ -20,6 +20,7 @@ import ( "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" + colfg "go.opentelemetry.io/collector/featuregate" corev1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/api/resource" "k8s.io/apimachinery/pkg/util/intstr" @@ -27,8 +28,11 @@ import ( "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager" "github.com/open-telemetry/opentelemetry-operator/internal/config" "github.com/open-telemetry/opentelemetry-operator/internal/naming" + "github.com/open-telemetry/opentelemetry-operator/pkg/constants" + "github.com/open-telemetry/opentelemetry-operator/pkg/featuregate" ) var logger = logf.Log.WithName("unit-tests") @@ -384,6 +388,31 @@ func TestArgs(t *testing.T) { assert.Equal(t, expected, c.Args) } +func TestContainerWithCertManagerAvailable(t *testing.T) { + // prepare + targetAllocator := v1alpha1.TargetAllocator{} + + 
flgs := featuregate.Flags(colfg.GlobalRegistry()) + err := flgs.Parse([]string{"--feature-gates=operator.targetallocator.mtls"}) + require.NoError(t, err) + + cfg := config.New(config.WithCertManagerAvailability(certmanager.Available)) + + // test + c := Container(cfg, logger, targetAllocator) + + // verify + assert.Equal(t, "http", c.Ports[0].Name) + assert.Equal(t, int32(8080), c.Ports[0].ContainerPort) + assert.Equal(t, "https", c.Ports[1].Name) + assert.Equal(t, int32(8443), c.Ports[1].ContainerPort) + + assert.Contains(t, c.VolumeMounts, corev1.VolumeMount{ + Name: naming.TAServerCertificate(""), + MountPath: constants.TACollectorTLSDirPath, + }) +} + func TestContainerCustomVolumes(t *testing.T) { // prepare targetAllocator := v1alpha1.TargetAllocator{ diff --git a/internal/manifests/targetallocator/issuer.go b/internal/manifests/targetallocator/issuer.go new file mode 100644 index 0000000000..8732fd1376 --- /dev/null +++ b/internal/manifests/targetallocator/issuer.go @@ -0,0 +1,63 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package targetallocator + +import ( + cmv1 "github.com/cert-manager/cert-manager/pkg/apis/certmanager/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/open-telemetry/opentelemetry-operator/internal/manifests/manifestutils" + "github.com/open-telemetry/opentelemetry-operator/internal/naming" +) + +// SelfSignedIssuer returns a self-signed issuer for the given instance. +func SelfSignedIssuer(params Params) *cmv1.Issuer { + name := naming.SelfSignedIssuer(params.TargetAllocator.Name) + labels := manifestutils.Labels(params.TargetAllocator.ObjectMeta, name, params.TargetAllocator.Spec.Image, ComponentOpenTelemetryTargetAllocator, nil) + + return &cmv1.Issuer{ + ObjectMeta: metav1.ObjectMeta{ + Name: name, + Namespace: params.TargetAllocator.Namespace, + Labels: labels, + }, + Spec: cmv1.IssuerSpec{ + IssuerConfig: cmv1.IssuerConfig{ + SelfSigned: &cmv1.SelfSignedIssuer{}, + }, + }, + } +} + +// CAIssuer returns a CA issuer for the given instance. +func CAIssuer(params Params) *cmv1.Issuer { + name := naming.CAIssuer(params.TargetAllocator.Name) + labels := manifestutils.Labels(params.TargetAllocator.ObjectMeta, name, params.TargetAllocator.Spec.Image, ComponentOpenTelemetryTargetAllocator, nil) + + return &cmv1.Issuer{ + ObjectMeta: metav1.ObjectMeta{ + Name: name, + Namespace: params.TargetAllocator.Namespace, + Labels: labels, + }, + Spec: cmv1.IssuerSpec{ + IssuerConfig: cmv1.IssuerConfig{ + CA: &cmv1.CAIssuer{ + SecretName: naming.CACertificate(params.TargetAllocator.Name), + }, + }, + }, + } +} diff --git a/internal/manifests/targetallocator/issuer_test.go b/internal/manifests/targetallocator/issuer_test.go new file mode 100644 index 0000000000..d5d0c1d021 --- /dev/null +++ b/internal/manifests/targetallocator/issuer_test.go @@ -0,0 +1,113 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package targetallocator + +import ( + "testing" + + "github.com/stretchr/testify/assert" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" + "github.com/open-telemetry/opentelemetry-operator/internal/config" +) + +type SelfSignedIssuerConfig struct { + Name string + Namespace string + Labels map[string]string +} + +type CAIssuerConfig struct { + Name string + Namespace string + Labels map[string]string + SecretName string +} + +func TestSelfSignedIssuer(t *testing.T) { + taSpec := v1alpha1.TargetAllocatorSpec{} + ta := v1alpha1.TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + Name: "my-instance", + Namespace: "my-namespace", + }, + Spec: taSpec, + } + + cfg := config.New() + + expected := SelfSignedIssuerConfig{ + Name: "my-instance-self-signed-issuer", + Namespace: "my-namespace", + Labels: map[string]string{ + "app.kubernetes.io/name": "my-instance-self-signed-issuer", + "app.kubernetes.io/instance": "my-namespace.my-instance", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/version": "latest", + }, + } + + params := Params{ + Config: cfg, + TargetAllocator: ta, + } + + issuer := SelfSignedIssuer(params) + + assert.Equal(t, expected.Name, issuer.Name) + assert.Equal(t, expected.Namespace, issuer.Namespace) + assert.Equal(t, expected.Labels, issuer.Labels) + assert.NotNil(t, issuer.Spec.SelfSigned) +} + +func TestCAIssuer(t 
*testing.T) { + ta := v1alpha1.TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + Name: "my-instance", + Namespace: "my-namespace", + }, + } + + cfg := config.New() + + expected := CAIssuerConfig{ + Name: "my-instance-ca-issuer", + Namespace: "my-namespace", + Labels: map[string]string{ + "app.kubernetes.io/name": "my-instance-ca-issuer", + "app.kubernetes.io/instance": "my-namespace.my-instance", + "app.kubernetes.io/managed-by": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry", + "app.kubernetes.io/component": "opentelemetry-targetallocator", + "app.kubernetes.io/version": "latest", + }, + SecretName: "my-instance-ca-cert", + } + + params := Params{ + Config: cfg, + TargetAllocator: ta, + } + + issuer := CAIssuer(params) + + assert.Equal(t, expected.Name, issuer.Name) + assert.Equal(t, expected.Namespace, issuer.Namespace) + assert.Equal(t, expected.Labels, issuer.Labels) + assert.Equal(t, expected.SecretName, issuer.Spec.CA.SecretName) +} diff --git a/internal/manifests/targetallocator/service.go b/internal/manifests/targetallocator/service.go index 9577a43290..b372cd97a2 100644 --- a/internal/manifests/targetallocator/service.go +++ b/internal/manifests/targetallocator/service.go @@ -19,8 +19,10 @@ import ( metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/util/intstr" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager" "github.com/open-telemetry/opentelemetry-operator/internal/manifests/manifestutils" "github.com/open-telemetry/opentelemetry-operator/internal/naming" + "github.com/open-telemetry/opentelemetry-operator/pkg/featuregate" ) func Service(params Params) *corev1.Service { @@ -28,6 +30,19 @@ func Service(params Params) *corev1.Service { labels := manifestutils.Labels(params.TargetAllocator.ObjectMeta, name, params.TargetAllocator.Spec.Image, ComponentOpenTelemetryTargetAllocator, nil) selector := manifestutils.TASelectorLabels(params.TargetAllocator, 
ComponentOpenTelemetryTargetAllocator) + ports := make([]corev1.ServicePort, 0) + ports = append(ports, corev1.ServicePort{ + Name: "targetallocation", + Port: 80, + TargetPort: intstr.FromString("http")}) + + if params.Config.CertManagerAvailability() == certmanager.Available && featuregate.EnableTargetAllocatorMTLS.IsEnabled() { + ports = append(ports, corev1.ServicePort{ + Name: "targetallocation-https", + Port: 443, + TargetPort: intstr.FromString("https")}) + } + return &corev1.Service{ ObjectMeta: metav1.ObjectMeta{ Name: naming.TAService(params.TargetAllocator.Name), @@ -35,12 +50,8 @@ func Service(params Params) *corev1.Service { Labels: labels, }, Spec: corev1.ServiceSpec{ - Selector: selector, - Ports: []corev1.ServicePort{{ - Name: "targetallocation", - Port: 80, - TargetPort: intstr.FromString("http"), - }}, + Selector: selector, + Ports: ports, IPFamilies: params.TargetAllocator.Spec.IpFamilies, IPFamilyPolicy: params.TargetAllocator.Spec.IpFamilyPolicy, }, diff --git a/internal/manifests/targetallocator/service_test.go b/internal/manifests/targetallocator/service_test.go index f21e0fe5d6..2c0aead766 100644 --- a/internal/manifests/targetallocator/service_test.go +++ b/internal/manifests/targetallocator/service_test.go @@ -18,10 +18,14 @@ import ( "testing" "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + colfg "go.opentelemetry.io/collector/featuregate" v1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/util/intstr" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager" "github.com/open-telemetry/opentelemetry-operator/internal/config" + "github.com/open-telemetry/opentelemetry-operator/pkg/featuregate" ) func TestServicePorts(t *testing.T) { @@ -42,3 +46,32 @@ func TestServicePorts(t *testing.T) { assert.Equal(t, ports[0].Port, s.Spec.Ports[0].Port) assert.Equal(t, ports[0].TargetPort, s.Spec.Ports[0].TargetPort) } + +func TestServicePortsWithTargetAllocatorMTLS(t *testing.T) { + 
targetAllocator := targetAllocatorInstance() + cfg := config.New(config.WithCertManagerAvailability(certmanager.Available)) + + flgs := featuregate.Flags(colfg.GlobalRegistry()) + err := flgs.Parse([]string{"--feature-gates=operator.targetallocator.mtls"}) + require.NoError(t, err) + + params := Params{ + TargetAllocator: targetAllocator, + Config: cfg, + Log: logger, + } + + ports := []v1.ServicePort{ + {Name: "targetallocation", Port: 80, TargetPort: intstr.FromString("http")}, + {Name: "targetallocation-https", Port: 443, TargetPort: intstr.FromString("https")}, + } + + s := Service(params) + + assert.Equal(t, ports[0].Name, s.Spec.Ports[0].Name) + assert.Equal(t, ports[0].Port, s.Spec.Ports[0].Port) + assert.Equal(t, ports[0].TargetPort, s.Spec.Ports[0].TargetPort) + assert.Equal(t, ports[1].Name, s.Spec.Ports[1].Name) + assert.Equal(t, ports[1].Port, s.Spec.Ports[1].Port) + assert.Equal(t, ports[1].TargetPort, s.Spec.Ports[1].TargetPort) +} diff --git a/internal/manifests/targetallocator/targetallocator.go b/internal/manifests/targetallocator/targetallocator.go index e1da206f4f..21b00eebc8 100644 --- a/internal/manifests/targetallocator/targetallocator.go +++ b/internal/manifests/targetallocator/targetallocator.go @@ -22,6 +22,7 @@ import ( "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager" "github.com/open-telemetry/opentelemetry-operator/internal/config" "github.com/open-telemetry/opentelemetry-operator/internal/manifests" "github.com/open-telemetry/opentelemetry-operator/pkg/featuregate" @@ -46,6 +47,16 @@ func Build(params Params) ([]client.Object, error) { resourceFactories = append(resourceFactories, manifests.FactoryWithoutError(ServiceMonitor)) } + if params.Config.CertManagerAvailability() == certmanager.Available && featuregate.EnableTargetAllocatorMTLS.IsEnabled() { + resourceFactories = 
append(resourceFactories, + manifests.FactoryWithoutError(SelfSignedIssuer), + manifests.FactoryWithoutError(CACertificate), + manifests.FactoryWithoutError(CAIssuer), + manifests.FactoryWithoutError(ServingCertificate), + manifests.FactoryWithoutError(ClientCertificate), + ) + } + for _, factory := range resourceFactories { res, err := factory(params) if err != nil { diff --git a/internal/manifests/targetallocator/volume.go b/internal/manifests/targetallocator/volume.go index 2da3a961b2..c78b736254 100644 --- a/internal/manifests/targetallocator/volume.go +++ b/internal/manifests/targetallocator/volume.go @@ -18,8 +18,10 @@ import ( corev1 "k8s.io/api/core/v1" "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager" "github.com/open-telemetry/opentelemetry-operator/internal/config" "github.com/open-telemetry/opentelemetry-operator/internal/naming" + "github.com/open-telemetry/opentelemetry-operator/pkg/featuregate" ) // Volumes builds the volumes for the given instance, including the config map volume. 
@@ -38,5 +40,16 @@ func Volumes(cfg config.Config, instance v1alpha1.TargetAllocator) []corev1.Volu }, }} + if cfg.CertManagerAvailability() == certmanager.Available && featuregate.EnableTargetAllocatorMTLS.IsEnabled() { + volumes = append(volumes, corev1.Volume{ + Name: naming.TAServerCertificate(instance.Name), + VolumeSource: corev1.VolumeSource{ + Secret: &corev1.SecretVolumeSource{ + SecretName: naming.TAServerCertificateSecretName(instance.Name), + }, + }, + }) + } + return volumes } diff --git a/internal/manifests/targetallocator/volume_test.go b/internal/manifests/targetallocator/volume_test.go index 6d255e849c..898f900924 100644 --- a/internal/manifests/targetallocator/volume_test.go +++ b/internal/manifests/targetallocator/volume_test.go @@ -18,10 +18,16 @@ import ( "testing" "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + colfg "go.opentelemetry.io/collector/featuregate" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager" "github.com/open-telemetry/opentelemetry-operator/internal/config" "github.com/open-telemetry/opentelemetry-operator/internal/naming" + "github.com/open-telemetry/opentelemetry-operator/pkg/featuregate" ) func TestVolumeNewDefault(t *testing.T) { @@ -41,3 +47,58 @@ func TestVolumeNewDefault(t *testing.T) { // check that it's the ta-internal volume, with the config map assert.Equal(t, naming.TAConfigMapVolume(), volumes[0].Name) } + +func TestVolumeWithTargetAllocatorMTLS(t *testing.T) { + t.Run("CertManager available and EnableTargetAllocatorMTLS enabled", func(t *testing.T) { + ta := v1alpha1.TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + }, + } + cfg := config.New(config.WithCertManagerAvailability(certmanager.Available)) + + flgs := featuregate.Flags(colfg.GlobalRegistry()) + err := 
flgs.Parse([]string{"--feature-gates=operator.targetallocator.mtls"}) + require.NoError(t, err) + + volumes := Volumes(cfg, ta) + + expectedVolume := corev1.Volume{ + Name: naming.TAServerCertificate(ta.Name), + VolumeSource: corev1.VolumeSource{ + Secret: &corev1.SecretVolumeSource{ + SecretName: naming.TAServerCertificateSecretName(ta.Name), + }, + }, + } + assert.Contains(t, volumes, expectedVolume) + }) + + t.Run("CertManager not available", func(t *testing.T) { + ta := v1alpha1.TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + }, + } + cfg := config.New(config.WithCertManagerAvailability(certmanager.NotAvailable)) + + flgs := featuregate.Flags(colfg.GlobalRegistry()) + err := flgs.Parse([]string{"--feature-gates=operator.targetallocator.mtls"}) + require.NoError(t, err) + + volumes := Volumes(cfg, ta) + assert.NotContains(t, volumes, corev1.Volume{Name: naming.TAServerCertificate(ta.Name)}) + }) + + t.Run("EnableTargetAllocatorMTLS disabled", func(t *testing.T) { + ta := v1alpha1.TargetAllocator{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-targetallocator", + }, + } + cfg := config.New(config.WithCertManagerAvailability(certmanager.Available)) + + volumes := Volumes(cfg, ta) + assert.NotContains(t, volumes, corev1.Volume{Name: naming.TAServerCertificate(ta.Name)}) + }) +} diff --git a/internal/naming/main.go b/internal/naming/main.go index f4c6dc3389..149a9f9d5a 100644 --- a/internal/naming/main.go +++ b/internal/naming/main.go @@ -116,6 +116,11 @@ func MonitoringService(otelcol string) string { return DNSName(Truncate("%s-monitoring", 63, Service(otelcol))) } +// ExtensionService builds the name for the extension service based on the instance. +func ExtensionService(otelcol string) string { + return DNSName(Truncate("%s-extension", 63, Service(otelcol))) +} + // Service builds the service name based on the instance. 
func Service(otelcol string) string { return DNSName(Truncate("%s-collector", 63, otelcol)) @@ -180,3 +185,38 @@ func TargetAllocatorServiceMonitor(otelcol string) string { func OpAMPBridgeServiceAccount(opampBridge string) string { return DNSName(Truncate("%s-opamp-bridge", 63, opampBridge)) } + +// SelfSignedIssuer returns the SelfSigned Issuer name based on the instance. +func SelfSignedIssuer(otelcol string) string { + return DNSName(Truncate("%s-self-signed-issuer", 63, otelcol)) +} + +// CAIssuer returns the CA Issuer name based on the instance. +func CAIssuer(otelcol string) string { + return DNSName(Truncate("%s-ca-issuer", 63, otelcol)) +} + +// CACertificate returns the CA Certificate name based on the instance. +func CACertificate(otelcol string) string { + return DNSName(Truncate("%s-ca-cert", 63, otelcol)) +} + +// TAServerCertificate returns the Certificate name based on the instance. +func TAServerCertificate(otelcol string) string { + return DNSName(Truncate("%s-ta-server-cert", 63, otelcol)) +} + +// TAServerCertificateSecretName returns the Secret name based on the instance. +func TAServerCertificateSecretName(otelcol string) string { + return DNSName(Truncate("%s-ta-server-cert", 63, otelcol)) +} + +// TAClientCertificate returns the Certificate name based on the instance. +func TAClientCertificate(otelcol string) string { + return DNSName(Truncate("%s-ta-client-cert", 63, otelcol)) +} + +// TAClientCertificateSecretName returns the Secret name based on the instance.
+func TAClientCertificateSecretName(otelcol string) string { + return DNSName(Truncate("%s-ta-client-cert", 63, otelcol)) +} diff --git a/internal/operator-metrics/metrics.go b/internal/operator-metrics/metrics.go new file mode 100644 index 0000000000..dd95e16e7e --- /dev/null +++ b/internal/operator-metrics/metrics.go @@ -0,0 +1,197 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package operatormetrics + +import ( + "context" + "fmt" + "os" + + "github.com/go-logr/logr" + monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/util/intstr" + "k8s.io/client-go/rest" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/manager" +) + +var ( + // namespaceFile is the path to the namespace file for the service account. + namespaceFile = "/var/run/secrets/kubernetes.io/serviceaccount/namespace" + + // caBundleConfigMap declares the name of the config map for the CA bundle. + caBundleConfigMap = "serving-certs-ca-bundle" + + // prometheusCAFile declares the path for prometheus CA file for service monitors in OpenShift. 
+ prometheusCAFile = fmt.Sprintf("/etc/prometheus/configmaps/%s/service-ca.crt", caBundleConfigMap) + + //nolint:gosec + // bearerTokenFile declares the path for bearer token file for service monitors. + bearerTokenFile = "/var/run/secrets/kubernetes.io/serviceaccount/token" + + // openshiftInClusterMonitoringNamespace declares the namespace for the OpenShift in-cluster monitoring. + openshiftInClusterMonitoringNamespace = "openshift-monitoring" +) + +var _ manager.Runnable = &OperatorMetrics{} + +type OperatorMetrics struct { + kubeClient client.Client + log logr.Logger +} + +func NewOperatorMetrics(config *rest.Config, scheme *runtime.Scheme, log logr.Logger) (OperatorMetrics, error) { + kubeClient, err := client.New(config, client.Options{Scheme: scheme}) + if err != nil { + return OperatorMetrics{}, err + } + + return OperatorMetrics{ + kubeClient: kubeClient, + log: log, + }, nil +} + +func (om OperatorMetrics) Start(ctx context.Context) error { + err := om.createOperatorMetricsServiceMonitor(ctx) + if err != nil { + om.log.Error(err, "error creating Service Monitor for operator metrics") + } + + return nil +} + +func (om OperatorMetrics) NeedLeaderElection() bool { + return true +} + +func (om OperatorMetrics) caConfigMapExists() bool { + return om.kubeClient.Get(context.Background(), client.ObjectKey{ + Name: caBundleConfigMap, + Namespace: openshiftInClusterMonitoringNamespace, + }, &corev1.ConfigMap{}, + ) == nil +} + +func (om OperatorMetrics) getOwnerReferences(ctx context.Context, namespace string) (metav1.OwnerReference, error) { + var deploymentList appsv1.DeploymentList + + listOptions := []client.ListOption{ + client.InNamespace(namespace), + client.MatchingLabels(map[string]string{ + "app.kubernetes.io/name": "opentelemetry-operator", + "control-plane": "controller-manager", + }), + } + + err := om.kubeClient.List(ctx, &deploymentList, listOptions...)
+ if err != nil { + return metav1.OwnerReference{}, err + } + + if len(deploymentList.Items) == 0 { + return metav1.OwnerReference{}, fmt.Errorf("no deployments found with the specified label") + } + deployment := &deploymentList.Items[0] + + ownerRef := metav1.OwnerReference{ + APIVersion: "apps/v1", + Kind: "Deployment", + Name: deployment.Name, + UID: deployment.UID, + } + + return ownerRef, nil +} + +func (om OperatorMetrics) createOperatorMetricsServiceMonitor(ctx context.Context) error { + rawNamespace, err := os.ReadFile(namespaceFile) + if err != nil { + return fmt.Errorf("error reading namespace file: %w", err) + } + namespace := string(rawNamespace) + + ownerRef, err := om.getOwnerReferences(ctx, namespace) + if err != nil { + return fmt.Errorf("error getting owner references: %w", err) + } + + var tlsConfig *monitoringv1.TLSConfig + + if om.caConfigMapExists() { + serviceName := fmt.Sprintf("opentelemetry-operator-controller-manager-metrics-service.%s.svc", namespace) + + tlsConfig = &monitoringv1.TLSConfig{ + CAFile: prometheusCAFile, + SafeTLSConfig: monitoringv1.SafeTLSConfig{ + ServerName: &serviceName, + }, + } + } else { + t := true + tlsConfig = &monitoringv1.TLSConfig{ + SafeTLSConfig: monitoringv1.SafeTLSConfig{ + // kube-rbac-proxy uses a self-signed cert by default + InsecureSkipVerify: &t, + }, + } + } + + sm := monitoringv1.ServiceMonitor{ + ObjectMeta: metav1.ObjectMeta{ + Name: "opentelemetry-operator-metrics-monitor", + Namespace: namespace, + Labels: map[string]string{ + "app.kubernetes.io/name": "opentelemetry-operator", + "app.kubernetes.io/part-of": "opentelemetry-operator", + "control-plane": "controller-manager", + }, + OwnerReferences: []metav1.OwnerReference{ownerRef}, + }, + Spec: monitoringv1.ServiceMonitorSpec{ + Selector: metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/name": "opentelemetry-operator", + }, + }, + Endpoints: []monitoringv1.Endpoint{ + { + BearerTokenFile: bearerTokenFile, + 
Interval: "30s", + Path: "/metrics", + Scheme: "https", + ScrapeTimeout: "10s", + TargetPort: &intstr.IntOrString{IntVal: 8443}, + TLSConfig: tlsConfig, + }, + }, + }, + } + + err = om.kubeClient.Create(ctx, &sm) + // The ServiceMonitor may already exist if this is a restart + if err != nil && !apierrors.IsAlreadyExists(err) { + return err + } + + <-ctx.Done() + + return om.kubeClient.Delete(ctx, &sm) +} diff --git a/internal/operator-metrics/metrics_test.go b/internal/operator-metrics/metrics_test.go new file mode 100644 index 0000000000..a0293fa2e5 --- /dev/null +++ b/internal/operator-metrics/metrics_test.go @@ -0,0 +1,201 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License.
+ +package operatormetrics + +import ( + "context" + "os" + "reflect" + "testing" + "time" + + "github.com/go-logr/logr" + monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/types" + "k8s.io/apimachinery/pkg/util/wait" + "k8s.io/client-go/rest" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" +) + +func TestNewOperatorMetrics(t *testing.T) { + config := &rest.Config{} + scheme := runtime.NewScheme() + metrics, err := NewOperatorMetrics(config, scheme, logr.Discard()) + assert.NoError(t, err) + assert.NotNil(t, metrics.kubeClient) +} + +func TestOperatorMetrics_Start(t *testing.T) { + tmpFile, err := os.CreateTemp("", "namespace") + require.NoError(t, err) + defer os.Remove(tmpFile.Name()) + + _, err = tmpFile.WriteString("test-namespace") + require.NoError(t, err) + tmpFile.Close() + + namespaceFile = tmpFile.Name() + + scheme := runtime.NewScheme() + require.NoError(t, corev1.AddToScheme(scheme)) + require.NoError(t, appsv1.AddToScheme(scheme)) + require.NoError(t, monitoringv1.AddToScheme(scheme)) + + client := fake.NewClientBuilder().WithScheme(scheme).WithRuntimeObjects( + &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{Name: "opentelemetry-operator", Namespace: "test-namespace", Labels: map[string]string{"app.kubernetes.io/name": "opentelemetry-operator", "control-plane": "controller-manager"}}, + }, + ).Build() + + metrics := OperatorMetrics{kubeClient: client} + + ctx, cancel := context.WithCancel(context.Background()) + errChan := make(chan error) + go func() { + errChan <- metrics.Start(ctx) + }() + + ctxTimeout, cancelTimeout := context.WithTimeout(ctx, time.Second*10) + 
defer cancelTimeout() + + // Wait until one service monitor is being created + var serviceMonitor *monitoringv1.ServiceMonitor = &monitoringv1.ServiceMonitor{} + err = wait.PollUntilContextTimeout( + ctxTimeout, + time.Millisecond*100, + time.Second*10, + true, + func(ctx context.Context) (bool, error) { + errGet := client.Get(ctx, types.NamespacedName{Name: "opentelemetry-operator-metrics-monitor", Namespace: "test-namespace"}, serviceMonitor) + + if errGet != nil { + if apierrors.IsNotFound(errGet) { + return false, nil + } + // return the Get error itself; returning the stale outer err would silently swallow failures + return false, errGet + } + return true, nil + }, + ) + require.NoError(t, err) + + cancel() + err = <-errChan + assert.NoError(t, err) +} + +func TestOperatorMetrics_NeedLeaderElection(t *testing.T) { + metrics := OperatorMetrics{} + assert.True(t, metrics.NeedLeaderElection()) +} + +func TestOperatorMetrics_caConfigMapExists(t *testing.T) { + scheme := runtime.NewScheme() + err := corev1.AddToScheme(scheme) + require.NoError(t, err) + + client := fake.NewClientBuilder().WithScheme(scheme).WithObjects( + &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Name: caBundleConfigMap, + Namespace: openshiftInClusterMonitoringNamespace, + }, + }, + ).Build() + + metrics := OperatorMetrics{kubeClient: client} + + assert.True(t, metrics.caConfigMapExists()) + + // Test when the ConfigMap doesn't exist + clientWithoutConfigMap := fake.NewClientBuilder().WithScheme(scheme).Build() + metricsWithoutConfigMap := OperatorMetrics{kubeClient: clientWithoutConfigMap} + assert.False(t, metricsWithoutConfigMap.caConfigMapExists()) +} + +func TestOperatorMetrics_getOwnerReferences(t *testing.T) { + tests := []struct { + name string + namespace string + objects []client.Object + want metav1.OwnerReference + wantErr bool + }{ + { + name: "successful owner reference retrieval", + namespace: "test-namespace", + objects: []client.Object{ + &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "opentelemetry-operator", + Namespace: "test-namespace", + UID:
"test-uid", + Labels: map[string]string{ + "app.kubernetes.io/name": "opentelemetry-operator", + "control-plane": "controller-manager", + }, + }, + }, + }, + want: metav1.OwnerReference{ + APIVersion: "apps/v1", + Kind: "Deployment", + Name: "opentelemetry-operator", + UID: "test-uid", + }, + wantErr: false, + }, + { + name: "no deployments found", + namespace: "test-namespace", + objects: []client.Object{}, + want: metav1.OwnerReference{}, + wantErr: true, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + scheme := runtime.NewScheme() + _ = appsv1.AddToScheme(scheme) + fakeClient := fake.NewClientBuilder(). + WithScheme(scheme). + WithObjects(tt.objects...). + Build() + + om := OperatorMetrics{ + kubeClient: fakeClient, + log: logr.Discard(), + } + + got, err := om.getOwnerReferences(context.Background(), tt.namespace) + if (err != nil) != tt.wantErr { + t.Errorf("getOwnerReferences() error = %v, wantErr %v", err, tt.wantErr) + return + } + if !reflect.DeepEqual(got, tt.want) { + t.Errorf("getOwnerReferences() got = %v, want %v", got, tt.want) + } + }) + } +} diff --git a/internal/rbac/access.go b/internal/rbac/access.go index 5bdc9b27cf..ab34bc7485 100644 --- a/internal/rbac/access.go +++ b/internal/rbac/access.go @@ -29,6 +29,13 @@ const ( serviceAccountFmtStr = "system:serviceaccount:%s:%s" ) +type SAReviewer interface { + CheckPolicyRules(ctx context.Context, serviceAccount, serviceAccountNamespace string, rules ...*rbacv1.PolicyRule) ([]*v1.SubjectAccessReview, error) + CanAccess(ctx context.Context, serviceAccount, serviceAccountNamespace string, res *v1.ResourceAttributes, nonResourceAttributes *v1.NonResourceAttributes) (*v1.SubjectAccessReview, error) +} + +var _ SAReviewer = &Reviewer{} + type Reviewer struct { client kubernetes.Interface } diff --git a/internal/rbac/format.go b/internal/rbac/format.go index 784bcc39c2..da0a4ca1c2 100644 --- a/internal/rbac/format.go +++ b/internal/rbac/format.go @@ -23,22 +23,27 @@ import ( 
// WarningsGroupedByResource is a helper to take the missing permissions and format them as warnings. func WarningsGroupedByResource(reviews []*v1.SubjectAccessReview) []string { - fullResourceToVerbs := make(map[string][]string) + userFullResourceToVerbs := make(map[string]map[string][]string) for _, review := range reviews { + if _, ok := userFullResourceToVerbs[review.Spec.User]; !ok { + userFullResourceToVerbs[review.Spec.User] = make(map[string][]string) + } if review.Spec.ResourceAttributes != nil { key := fmt.Sprintf("%s/%s", review.Spec.ResourceAttributes.Group, review.Spec.ResourceAttributes.Resource) if len(review.Spec.ResourceAttributes.Group) == 0 { key = review.Spec.ResourceAttributes.Resource } - fullResourceToVerbs[key] = append(fullResourceToVerbs[key], review.Spec.ResourceAttributes.Verb) + userFullResourceToVerbs[review.Spec.User][key] = append(userFullResourceToVerbs[review.Spec.User][key], review.Spec.ResourceAttributes.Verb) } else if review.Spec.NonResourceAttributes != nil { key := fmt.Sprintf("nonResourceURL: %s", review.Spec.NonResourceAttributes.Path) - fullResourceToVerbs[key] = append(fullResourceToVerbs[key], review.Spec.NonResourceAttributes.Verb) + userFullResourceToVerbs[review.Spec.User][key] = append(userFullResourceToVerbs[review.Spec.User][key], review.Spec.NonResourceAttributes.Verb) } } var warnings []string - for fullResource, verbs := range fullResourceToVerbs { - warnings = append(warnings, fmt.Sprintf("missing the following rules for %s: [%s]", fullResource, strings.Join(verbs, ","))) + for user, fullResourceToVerbs := range userFullResourceToVerbs { + for fullResource, verbs := range fullResourceToVerbs { + warnings = append(warnings, fmt.Sprintf("missing the following rules for %s - %s: [%s]", user, fullResource, strings.Join(verbs, ","))) + } } return warnings } diff --git a/internal/rbac/format_test.go b/internal/rbac/format_test.go index 82f97d25fe..8c08464c40 100644 --- a/internal/rbac/format_test.go +++ 
b/internal/rbac/format_test.go @@ -37,6 +37,7 @@ func TestWarningsGroupedByResource(t *testing.T) { reviews: []*v1.SubjectAccessReview{ { Spec: v1.SubjectAccessReviewSpec{ + User: "system:serviceaccount:test-ns:test-targetallocator", ResourceAttributes: &v1.ResourceAttributes{ Verb: "get", Group: "", @@ -45,13 +46,14 @@ func TestWarningsGroupedByResource(t *testing.T) { }, }, }, - expected: []string{"missing the following rules for namespaces: [get]"}, + expected: []string{"missing the following rules for system:serviceaccount:test-ns:test-targetallocator - namespaces: [get]"}, }, { desc: "One access review with non resource attributes", reviews: []*v1.SubjectAccessReview{ { Spec: v1.SubjectAccessReviewSpec{ + User: "system:serviceaccount:test-ns:test-targetallocator", ResourceAttributes: &v1.ResourceAttributes{ Verb: "watch", Group: "apps", @@ -60,7 +62,7 @@ func TestWarningsGroupedByResource(t *testing.T) { }, }, }, - expected: []string{"missing the following rules for apps/replicasets: [watch]"}, + expected: []string{"missing the following rules for system:serviceaccount:test-ns:test-targetallocator - apps/replicasets: [watch]"}, }, } diff --git a/internal/webhook/podmutation/webhookhandler.go b/internal/webhook/podmutation/webhookhandler.go index b4ad5fa7fc..a0704c5ad9 100644 --- a/internal/webhook/podmutation/webhookhandler.go +++ b/internal/webhook/podmutation/webhookhandler.go @@ -30,7 +30,7 @@ import ( ) // +kubebuilder:webhook:path=/mutate-v1-pod,mutating=true,failurePolicy=ignore,groups="",resources=pods,verbs=create,versions=v1,name=mpod.kb.io,sideEffects=none,admissionReviewVersions=v1 -// +kubebuilder:rbac:groups="",resources=namespaces,verbs=list;watch +// +kubebuilder:rbac:groups="",resources=namespaces;secrets,verbs=get;list;watch // +kubebuilder:rbac:groups=opentelemetry.io,resources=opentelemetrycollectors,verbs=get;list;watch // +kubebuilder:rbac:groups=opentelemetry.io,resources=instrumentations,verbs=get;list;watch // 
+kubebuilder:rbac:groups="apps",resources=replicasets,verbs=get;list;watch diff --git a/main.go b/main.go index 1d3471898f..7c82ce103d 100644 --- a/main.go +++ b/main.go @@ -25,6 +25,7 @@ import ( "strings" "time" + cmv1 "github.com/cert-manager/cert-manager/pkg/apis/certmanager/v1" routev1 "github.com/openshift/api/route/v1" monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1" "github.com/spf13/pflag" @@ -50,12 +51,14 @@ import ( otelv1beta1 "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" "github.com/open-telemetry/opentelemetry-operator/controllers" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect" + "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/openshift" "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/prometheus" "github.com/open-telemetry/opentelemetry-operator/internal/config" "github.com/open-telemetry/opentelemetry-operator/internal/fips" collectorManifests "github.com/open-telemetry/opentelemetry-operator/internal/manifests/collector" openshiftDashboards "github.com/open-telemetry/opentelemetry-operator/internal/openshift/dashboards" + operatormetrics "github.com/open-telemetry/opentelemetry-operator/internal/operator-metrics" "github.com/open-telemetry/opentelemetry-operator/internal/rbac" "github.com/open-telemetry/opentelemetry-operator/internal/version" "github.com/open-telemetry/opentelemetry-operator/internal/webhook/podmutation" @@ -349,7 +352,16 @@ func main() { } else { setupLog.Info("Openshift CRDs are not installed, skipping adding to scheme.") } + if cfg.CertManagerAvailability() == certmanager.Available { + setupLog.Info("Cert-Manager is available to the operator, adding to scheme.") + utilruntime.Must(cmv1.AddToScheme(scheme)) + if featuregate.EnableTargetAllocatorMTLS.IsEnabled() { + setupLog.Info("Securing the connection between the target 
allocator and the collector") + } + } else { + setupLog.Info("Cert-Manager is not available to the operator, skipping adding to scheme.") + } if cfg.AnnotationsFilter() != nil { for _, basePattern := range cfg.AnnotationsFilter() { _, compileErr := regexp.Compile(basePattern) @@ -380,6 +392,7 @@ func main() { Scheme: mgr.GetScheme(), Config: cfg, Recorder: mgr.GetEventRecorderFor("opentelemetry-operator"), + Reviewer: reviewer, }) if err = collectorReconciler.SetupWithManager(mgr); err != nil { @@ -387,17 +400,18 @@ func main() { os.Exit(1) } - // TODO: Uncomment the line below to enable the Target Allocator controller - //if err = controllers.NewTargetAllocatorReconciler( - // mgr.GetClient(), - // mgr.GetScheme(), - // mgr.GetEventRecorderFor("targetallocator"), - // cfg, - // ctrl.Log.WithName("controllers").WithName("TargetAllocator"), - //).SetupWithManager(mgr); err != nil { - // setupLog.Error(err, "unable to create controller", "controller", "TargetAllocator") - // os.Exit(1) - //} + if featuregate.CollectorUsesTargetAllocatorCR.IsEnabled() { + if err = controllers.NewTargetAllocatorReconciler( + mgr.GetClient(), + mgr.GetScheme(), + mgr.GetEventRecorderFor("targetallocator"), + cfg, + ctrl.Log.WithName("controllers").WithName("TargetAllocator"), + ).SetupWithManager(mgr); err != nil { + setupLog.Error(err, "unable to create controller", "controller", "TargetAllocator") + os.Exit(1) + } + } if err = controllers.NewOpAMPBridgeReconciler(controllers.OpAMPBridgeReconcilerParams{ Client: mgr.GetClient(), @@ -410,6 +424,17 @@ func main() { os.Exit(1) } + if cfg.PrometheusCRAvailability() == prometheus.Available { + operatorMetrics, opError := operatormetrics.NewOperatorMetrics(mgr.GetConfig(), scheme, ctrl.Log.WithName("operator-metrics-sm")) + if opError != nil { + setupLog.Error(opError, "Failed to create the operator metrics SM") + } + err = mgr.Add(operatorMetrics) + if err != nil { + setupLog.Error(err, "Failed to add the operator metrics SM") + } + } + if 
os.Getenv("ENABLE_WEBHOOKS") != "false" { var crdMetrics *otelv1beta1.Metrics @@ -423,16 +448,17 @@ func main() { if err != nil { setupLog.Error(err, "Error init CRD metrics") } - } - bv := func(collector otelv1beta1.OpenTelemetryCollector) admission.Warnings { + bv := func(ctx context.Context, collector otelv1beta1.OpenTelemetryCollector) admission.Warnings { var warnings admission.Warnings - params, newErr := collectorReconciler.GetParams(collector) + params, newErr := collectorReconciler.GetParams(ctx, collector) if err != nil { warnings = append(warnings, newErr.Error()) return warnings } + + params.ErrorAsWarning = true _, newErr = collectorManifests.Build(params) if newErr != nil { warnings = append(warnings, newErr.Error()) @@ -451,11 +477,12 @@ func main() { setupLog.Error(err, "unable to create webhook", "webhook", "OpenTelemetryCollector") os.Exit(1) } - // TODO: Uncomment the line below to enable the Target Allocator webhook - //if err = otelv1alpha1.SetupTargetAllocatorWebhook(mgr, cfg, reviewer); err != nil { - // setupLog.Error(err, "unable to create webhook", "webhook", "TargetAllocator") - // os.Exit(1) - //} + if featuregate.CollectorUsesTargetAllocatorCR.IsEnabled() { + if err = otelv1alpha1.SetupTargetAllocatorWebhook(mgr, cfg, reviewer); err != nil { + setupLog.Error(err, "unable to create webhook", "webhook", "TargetAllocator") + os.Exit(1) + } + } if err = otelv1alpha1.SetupInstrumentationWebhook(mgr, cfg); err != nil { setupLog.Error(err, "unable to create webhook", "webhook", "Instrumentation") os.Exit(1) diff --git a/pkg/collector/upgrade/suite_test.go b/pkg/collector/upgrade/suite_test.go index c5e5cdbd23..7a571bc481 100644 --- a/pkg/collector/upgrade/suite_test.go +++ b/pkg/collector/upgrade/suite_test.go @@ -42,6 +42,7 @@ import ( "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" "github.com/open-telemetry/opentelemetry-operator/internal/config" "github.com/open-telemetry/opentelemetry-operator/internal/rbac" + 
"github.com/open-telemetry/opentelemetry-operator/internal/version" ) var ( @@ -160,3 +161,9 @@ func TestMain(m *testing.M) { os.Exit(code) } + +func makeVersion(v string) version.Version { + return version.Version{ + OpenTelemetryCollector: v, + } +} diff --git a/pkg/collector/upgrade/upgrade.go b/pkg/collector/upgrade/upgrade.go index 9a1f426747..610fca140f 100644 --- a/pkg/collector/upgrade/upgrade.go +++ b/pkg/collector/upgrade/upgrade.go @@ -39,6 +39,17 @@ type VersionUpgrade struct { const RecordBufferSize int = 100 +func (u VersionUpgrade) semVer() *semver.Version { + if len(u.Version.OpenTelemetryCollector) == 0 { + return &Latest.Version + } + if v, err := semver.NewVersion(u.Version.OpenTelemetryCollector); err != nil { + return &Latest.Version + } else { + return v + } +} + // ManagedInstances finds all the otelcol instances for the current operator and upgrades them, if necessary. func (u VersionUpgrade) ManagedInstances(ctx context.Context) error { u.Log.Info("looking for managed instances to upgrade") @@ -107,9 +118,9 @@ func (u VersionUpgrade) ManagedInstance(_ context.Context, otelcol v1beta1.OpenT } updated := *(otelcol.DeepCopy()) - if instanceV.GreaterThan(&Latest.Version) { + if instanceV.GreaterThan(u.semVer()) { // Update with the latest known version, which is what we have from versions.txt - u.Log.V(4).Info("no upgrade routines are needed for the OpenTelemetry instance", "name", updated.Name, "namespace", updated.Namespace, "version", updated.Status.Version, "latest", Latest.Version.String()) + u.Log.V(4).Info("no upgrade routines are needed for the OpenTelemetry instance", "name", updated.Name, "namespace", updated.Namespace, "version", updated.Status.Version, "latest", u.semVer().String()) otelColV, err := semver.NewVersion(u.Version.OpenTelemetryCollector) if err != nil { @@ -126,6 +137,11 @@ func (u VersionUpgrade) ManagedInstance(_ context.Context, otelcol v1beta1.OpenT } for _, available := range versions { + // Don't run upgrades for 
versions after the webhook's set version. + // This is important only for testing. + if available.GreaterThan(u.semVer()) { + continue + } if available.GreaterThan(instanceV) { if available.upgrade != nil { otelcolV1alpha1 := &v1alpha1.OpenTelemetryCollector{} diff --git a/pkg/collector/upgrade/upgrade_test.go b/pkg/collector/upgrade/upgrade_test.go index 616b58daa8..49d23af8bb 100644 --- a/pkg/collector/upgrade/upgrade_test.go +++ b/pkg/collector/upgrade/upgrade_test.go @@ -141,7 +141,7 @@ func TestEnvVarUpdates(t *testing.T) { require.Equal(t, collectorInstance.Status.Version, persisted.Status.Version) currentV := version.Get() - currentV.OpenTelemetryCollector = "0.110.0" + currentV.OpenTelemetryCollector = "0.111.0" up := &upgrade.VersionUpgrade{ Log: logger, Version: currentV, diff --git a/pkg/collector/upgrade/v0_104_0_test.go b/pkg/collector/upgrade/v0_104_0_test.go index bdf88e7c8e..0c5b939479 100644 --- a/pkg/collector/upgrade/v0_104_0_test.go +++ b/pkg/collector/upgrade/v0_104_0_test.go @@ -24,7 +24,6 @@ import ( "k8s.io/client-go/tools/record" "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" - "github.com/open-telemetry/opentelemetry-operator/internal/version" "github.com/open-telemetry/opentelemetry-operator/pkg/collector/upgrade" ) @@ -46,7 +45,7 @@ func Test0_104_0Upgrade(t *testing.T) { versionUpgrade := &upgrade.VersionUpgrade{ Log: logger, - Version: version.Get(), + Version: makeVersion("0.104.0"), Client: k8sClient, Recorder: record.NewFakeRecorder(upgrade.RecordBufferSize), } @@ -56,7 +55,9 @@ func Test0_104_0Upgrade(t *testing.T) { t.Errorf("expect err: nil but got: %v", err) } assert.EqualValues(t, - map[string]string{}, + map[string]string{ + "feature-gates": "-component.UseLocalHostAsDefaultHost", + }, col.Spec.Args, "missing featuregate") } diff --git a/pkg/collector/upgrade/v0_105_0_test.go b/pkg/collector/upgrade/v0_105_0_test.go index c92880790d..b0af1cd8ea 100644 --- a/pkg/collector/upgrade/v0_105_0_test.go +++ 
b/pkg/collector/upgrade/v0_105_0_test.go @@ -23,7 +23,6 @@ import ( "k8s.io/client-go/tools/record" "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" - "github.com/open-telemetry/opentelemetry-operator/internal/version" "github.com/open-telemetry/opentelemetry-operator/pkg/collector/upgrade" ) @@ -59,7 +58,7 @@ func Test0_105_0Upgrade(t *testing.T) { versionUpgrade := &upgrade.VersionUpgrade{ Log: logger, - Version: version.Get(), + Version: makeVersion("0.105.0"), Client: k8sClient, Recorder: record.NewFakeRecorder(upgrade.RecordBufferSize), } diff --git a/pkg/collector/upgrade/v0_110_0_test.go b/pkg/collector/upgrade/v0_110_0_test.go new file mode 100644 index 0000000000..ec63f004c1 --- /dev/null +++ b/pkg/collector/upgrade/v0_110_0_test.go @@ -0,0 +1,66 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package upgrade_test + +import ( + "context" + "testing" + + "github.com/stretchr/testify/assert" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/client-go/tools/record" + + "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" + "github.com/open-telemetry/opentelemetry-operator/pkg/collector/upgrade" +) + +func Test0_110_0Upgrade(t *testing.T) { + collectorInstance := v1beta1.OpenTelemetryCollector{ + TypeMeta: metav1.TypeMeta{ + Kind: "OpenTelemetryCollector", + APIVersion: "v1beta1", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: "otel-my-instance", + Namespace: "somewhere", + }, + Status: v1beta1.OpenTelemetryCollectorStatus{ + Version: "0.104.0", + }, + Spec: v1beta1.OpenTelemetryCollectorSpec{ + OpenTelemetryCommonFields: v1beta1.OpenTelemetryCommonFields{ + Args: map[string]string{ + "foo": "bar", + "feature-gates": "+baz,-component.UseLocalHostAsDefaultHost", + }, + }, + Config: v1beta1.Config{}, + }, + } + + versionUpgrade := &upgrade.VersionUpgrade{ + Log: logger, + Version: makeVersion("0.110.0"), + Client: k8sClient, + Recorder: record.NewFakeRecorder(upgrade.RecordBufferSize), + } + + col, err := versionUpgrade.ManagedInstance(context.Background(), collectorInstance) + if err != nil { + t.Errorf("expect err: nil but got: %v", err) + } + assert.EqualValues(t, + map[string]string{"foo": "bar", "feature-gates": "+baz"}, col.Spec.Args) +} diff --git a/pkg/collector/upgrade/v0_111_0.go b/pkg/collector/upgrade/v0_111_0.go new file mode 100644 index 0000000000..5ba22efea0 --- /dev/null +++ b/pkg/collector/upgrade/v0_111_0.go @@ -0,0 +1,23 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package upgrade + +import ( + "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" +) + +func upgrade0_111_0(_ VersionUpgrade, otelcol *v1beta1.OpenTelemetryCollector) (*v1beta1.OpenTelemetryCollector, error) { //nolint:unparam + return otelcol, otelcol.Spec.Config.Service.ApplyDefaults() +} diff --git a/pkg/collector/upgrade/v0_111_0_test.go b/pkg/collector/upgrade/v0_111_0_test.go new file mode 100644 index 0000000000..d8b0907e13 --- /dev/null +++ b/pkg/collector/upgrade/v0_111_0_test.go @@ -0,0 +1,98 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package upgrade_test + +import ( + "context" + "testing" + + "github.com/stretchr/testify/assert" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/client-go/tools/record" + + "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" + "github.com/open-telemetry/opentelemetry-operator/pkg/collector/upgrade" +) + +func Test0_111_0Upgrade(t *testing.T) { + + defaultCollector := v1beta1.OpenTelemetryCollector{ + ObjectMeta: metav1.ObjectMeta{ + Name: "otel-my-instance", + Namespace: "somewhere", + }, + Status: v1beta1.OpenTelemetryCollectorStatus{ + Version: "0.110.0", + }, + Spec: v1beta1.OpenTelemetryCollectorSpec{ + OpenTelemetryCommonFields: v1beta1.OpenTelemetryCommonFields{}, + Config: v1beta1.Config{}, + }, + } + + defaultCollectorWithConfig := defaultCollector.DeepCopy() + + defaultCollectorWithConfig.Spec.Config.Service.Telemetry = &v1beta1.AnyConfig{ + Object: map[string]interface{}{ + "metrics": map[string]interface{}{ + "address": "1.2.3.4:8888", + }, + }, + } + + tt := []struct { + name string + input v1beta1.OpenTelemetryCollector + expected v1beta1.OpenTelemetryCollector + }{ + { + name: "telemetry settings exist", + input: *defaultCollectorWithConfig, + expected: *defaultCollectorWithConfig, + }, + { + name: "telemetry settings do not exist", + input: *defaultCollector.DeepCopy(), + expected: func() v1beta1.OpenTelemetryCollector { + col := defaultCollector.DeepCopy() + col.Spec.Config.Service.Telemetry = &v1beta1.AnyConfig{ + Object: map[string]interface{}{ + "metrics": map[string]interface{}{ + "address": "0.0.0.0:8888", + }, + }, + } + return *col + }(), + }, + } + + versionUpgrade := &upgrade.VersionUpgrade{ + Log: logger, + Version: makeVersion("0.111.0"), + Client: k8sClient, + Recorder: record.NewFakeRecorder(upgrade.RecordBufferSize), + } + + for _, tc := range tt { + t.Run(tc.name, func(t *testing.T) { + col, err := versionUpgrade.ManagedInstance(context.Background(), tc.input) + if err != nil { + t.Errorf("expect err: nil but 
got: %v", err) + } + assert.Equal(t, tc.expected.Spec.Config.Service.Telemetry, col.Spec.Config.Service.Telemetry) + }) + } +} diff --git a/pkg/collector/upgrade/v0_15_0_test.go b/pkg/collector/upgrade/v0_15_0_test.go index 0a01a3e847..4063f85829 100644 --- a/pkg/collector/upgrade/v0_15_0_test.go +++ b/pkg/collector/upgrade/v0_15_0_test.go @@ -25,7 +25,6 @@ import ( "k8s.io/client-go/tools/record" "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" - "github.com/open-telemetry/opentelemetry-operator/internal/version" "github.com/open-telemetry/opentelemetry-operator/pkg/collector/upgrade" ) @@ -57,7 +56,7 @@ func TestRemoveMetricsTypeFlags(t *testing.T) { // test up := &upgrade.VersionUpgrade{ Log: logger, - Version: version.Get(), + Version: makeVersion("0.15.0"), Client: nil, Recorder: record.NewFakeRecorder(upgrade.RecordBufferSize), } diff --git a/pkg/collector/upgrade/v0_19_0_test.go b/pkg/collector/upgrade/v0_19_0_test.go index 63162e5dce..3c3a5e66d0 100644 --- a/pkg/collector/upgrade/v0_19_0_test.go +++ b/pkg/collector/upgrade/v0_19_0_test.go @@ -26,7 +26,6 @@ import ( "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" "github.com/open-telemetry/opentelemetry-operator/internal/manifests/collector/adapters" - "github.com/open-telemetry/opentelemetry-operator/internal/version" "github.com/open-telemetry/opentelemetry-operator/pkg/collector/upgrade" ) @@ -74,7 +73,7 @@ service: // test up := &upgrade.VersionUpgrade{ Log: logger, - Version: version.Get(), + Version: makeVersion("0.19.0"), Client: nil, Recorder: record.NewFakeRecorder(upgrade.RecordBufferSize), } @@ -124,7 +123,7 @@ service: // test up := &upgrade.VersionUpgrade{ Log: logger, - Version: version.Get(), + Version: makeVersion("0.19.0"), Client: nil, Recorder: record.NewFakeRecorder(upgrade.RecordBufferSize), } @@ -191,7 +190,7 @@ service: // test up := &upgrade.VersionUpgrade{ Log: logger, - Version: version.Get(), + Version: makeVersion("0.19.0"), Client: nil, Recorder: 
record.NewFakeRecorder(upgrade.RecordBufferSize), } diff --git a/pkg/collector/upgrade/v0_24_0_test.go b/pkg/collector/upgrade/v0_24_0_test.go index caf53bdfd2..52669150ff 100644 --- a/pkg/collector/upgrade/v0_24_0_test.go +++ b/pkg/collector/upgrade/v0_24_0_test.go @@ -24,7 +24,6 @@ import ( "k8s.io/client-go/tools/record" "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" - "github.com/open-telemetry/opentelemetry-operator/internal/version" "github.com/open-telemetry/opentelemetry-operator/pkg/collector/upgrade" ) @@ -64,7 +63,7 @@ service: // test up := &upgrade.VersionUpgrade{ Log: logger, - Version: version.Get(), + Version: makeVersion("0.24.0"), Client: nil, Recorder: record.NewFakeRecorder(upgrade.RecordBufferSize), } diff --git a/pkg/collector/upgrade/v0_31_0_test.go b/pkg/collector/upgrade/v0_31_0_test.go index 861cdb492d..dd340ed655 100644 --- a/pkg/collector/upgrade/v0_31_0_test.go +++ b/pkg/collector/upgrade/v0_31_0_test.go @@ -24,7 +24,6 @@ import ( "k8s.io/client-go/tools/record" "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" - "github.com/open-telemetry/opentelemetry-operator/internal/version" "github.com/open-telemetry/opentelemetry-operator/pkg/collector/upgrade" ) @@ -63,7 +62,7 @@ service: // test up := &upgrade.VersionUpgrade{ Log: logger, - Version: version.Get(), + Version: makeVersion("0.31.0"), Client: nil, Recorder: record.NewFakeRecorder(upgrade.RecordBufferSize), } diff --git a/pkg/collector/upgrade/v0_36_0_test.go b/pkg/collector/upgrade/v0_36_0_test.go index 7695d39c00..346adb424c 100644 --- a/pkg/collector/upgrade/v0_36_0_test.go +++ b/pkg/collector/upgrade/v0_36_0_test.go @@ -24,7 +24,6 @@ import ( "k8s.io/client-go/tools/record" "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" - "github.com/open-telemetry/opentelemetry-operator/internal/version" "github.com/open-telemetry/opentelemetry-operator/pkg/collector/upgrade" ) @@ -79,7 +78,7 @@ service: up := &upgrade.VersionUpgrade{ Log: 
logger, - Version: version.Get(), + Version: makeVersion("0.36.0"), Client: nil, Recorder: record.NewFakeRecorder(upgrade.RecordBufferSize), } diff --git a/pkg/collector/upgrade/v0_38_0_test.go b/pkg/collector/upgrade/v0_38_0_test.go index 26e3d69bbb..efa74d9c6a 100644 --- a/pkg/collector/upgrade/v0_38_0_test.go +++ b/pkg/collector/upgrade/v0_38_0_test.go @@ -24,7 +24,6 @@ import ( "k8s.io/client-go/tools/record" "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" - "github.com/open-telemetry/opentelemetry-operator/internal/version" "github.com/open-telemetry/opentelemetry-operator/pkg/collector/upgrade" ) @@ -72,7 +71,7 @@ service: // EXPECTED: drop logging args and configure logging parameters into config from args up := &upgrade.VersionUpgrade{ Log: logger, - Version: version.Get(), + Version: makeVersion("0.38.0"), Client: nil, Recorder: record.NewFakeRecorder(upgrade.RecordBufferSize), } diff --git a/pkg/collector/upgrade/v0_39_0_test.go b/pkg/collector/upgrade/v0_39_0_test.go index 204c576c8a..39ffa04436 100644 --- a/pkg/collector/upgrade/v0_39_0_test.go +++ b/pkg/collector/upgrade/v0_39_0_test.go @@ -24,7 +24,6 @@ import ( "k8s.io/client-go/tools/record" "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" - "github.com/open-telemetry/opentelemetry-operator/internal/version" "github.com/open-telemetry/opentelemetry-operator/pkg/collector/upgrade" ) @@ -74,7 +73,7 @@ service: // drop processors.memory_limiter field 'ballast_size_mib' up := &upgrade.VersionUpgrade{ Log: logger, - Version: version.Get(), + Version: makeVersion("0.39.0"), Client: nil, Recorder: record.NewFakeRecorder(upgrade.RecordBufferSize), } diff --git a/pkg/collector/upgrade/v0_41_0_test.go b/pkg/collector/upgrade/v0_41_0_test.go index 01903044a5..f13047eefa 100644 --- a/pkg/collector/upgrade/v0_41_0_test.go +++ b/pkg/collector/upgrade/v0_41_0_test.go @@ -24,7 +24,6 @@ import ( "k8s.io/client-go/tools/record" 
"github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" - "github.com/open-telemetry/opentelemetry-operator/internal/version" "github.com/open-telemetry/opentelemetry-operator/pkg/collector/upgrade" ) @@ -64,7 +63,7 @@ service: // TESTCASE 1: restructure cors for both allowed_origin & allowed_headers up := &upgrade.VersionUpgrade{ Log: logger, - Version: version.Get(), + Version: makeVersion("0.41.0"), Client: nil, Recorder: record.NewFakeRecorder(upgrade.RecordBufferSize), } diff --git a/pkg/collector/upgrade/v0_43_0_test.go b/pkg/collector/upgrade/v0_43_0_test.go index 348b1d0b96..957849f941 100644 --- a/pkg/collector/upgrade/v0_43_0_test.go +++ b/pkg/collector/upgrade/v0_43_0_test.go @@ -24,7 +24,6 @@ import ( "k8s.io/client-go/tools/record" "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" - "github.com/open-telemetry/opentelemetry-operator/internal/version" "github.com/open-telemetry/opentelemetry-operator/pkg/collector/upgrade" ) @@ -70,7 +69,7 @@ service: // test up := &upgrade.VersionUpgrade{ Log: logger, - Version: version.Get(), + Version: makeVersion("0.43.0"), Client: nil, Recorder: record.NewFakeRecorder(upgrade.RecordBufferSize), } diff --git a/pkg/collector/upgrade/v0_56_0_test.go b/pkg/collector/upgrade/v0_56_0_test.go index 57ced4b07a..fd29c55aed 100644 --- a/pkg/collector/upgrade/v0_56_0_test.go +++ b/pkg/collector/upgrade/v0_56_0_test.go @@ -23,7 +23,6 @@ import ( "k8s.io/client-go/tools/record" "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" - "github.com/open-telemetry/opentelemetry-operator/internal/version" "github.com/open-telemetry/opentelemetry-operator/pkg/collector/upgrade" ) @@ -50,7 +49,7 @@ func Test0_56_0Upgrade(t *testing.T) { collectorInstance.Status.Version = "0.55.0" versionUpgrade := &upgrade.VersionUpgrade{ Log: logger, - Version: version.Get(), + Version: makeVersion("0.56.0"), Client: k8sClient, Recorder: record.NewFakeRecorder(upgrade.RecordBufferSize), } diff --git 
a/pkg/collector/upgrade/v0_57_2_test.go b/pkg/collector/upgrade/v0_57_2_test.go index a3ca59919c..c43b869591 100644 --- a/pkg/collector/upgrade/v0_57_2_test.go +++ b/pkg/collector/upgrade/v0_57_2_test.go @@ -23,7 +23,6 @@ import ( "k8s.io/client-go/tools/record" "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" - "github.com/open-telemetry/opentelemetry-operator/internal/version" "github.com/open-telemetry/opentelemetry-operator/pkg/collector/upgrade" ) @@ -67,7 +66,7 @@ service: //Test to remove port and change endpoint value. versionUpgrade := &upgrade.VersionUpgrade{ Log: logger, - Version: version.Get(), + Version: makeVersion("0.57.2"), Client: k8sClient, Recorder: record.NewFakeRecorder(upgrade.RecordBufferSize), } diff --git a/pkg/collector/upgrade/v0_61_0_test.go b/pkg/collector/upgrade/v0_61_0_test.go index f702695672..91bfdc447f 100644 --- a/pkg/collector/upgrade/v0_61_0_test.go +++ b/pkg/collector/upgrade/v0_61_0_test.go @@ -23,7 +23,6 @@ import ( "k8s.io/client-go/tools/record" "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" - "github.com/open-telemetry/opentelemetry-operator/internal/version" "github.com/open-telemetry/opentelemetry-operator/pkg/collector/upgrade" ) @@ -72,7 +71,7 @@ func Test0_61_0Upgrade(t *testing.T) { versionUpgrade := &upgrade.VersionUpgrade{ Log: logger, - Version: version.Get(), + Version: makeVersion("0.61.0"), Client: k8sClient, Recorder: record.NewFakeRecorder(upgrade.RecordBufferSize), } diff --git a/pkg/collector/upgrade/v0_9_0_test.go b/pkg/collector/upgrade/v0_9_0_test.go index c428034000..06c5c8cf7f 100644 --- a/pkg/collector/upgrade/v0_9_0_test.go +++ b/pkg/collector/upgrade/v0_9_0_test.go @@ -25,7 +25,6 @@ import ( "k8s.io/client-go/tools/record" "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" - "github.com/open-telemetry/opentelemetry-operator/internal/version" "github.com/open-telemetry/opentelemetry-operator/pkg/collector/upgrade" ) @@ -56,7 +55,7 @@ func 
TestRemoveConnectionDelay(t *testing.T) { // test up := &upgrade.VersionUpgrade{ Log: logger, - Version: version.Get(), + Version: makeVersion("0.9.0"), Client: nil, Recorder: record.NewFakeRecorder(upgrade.RecordBufferSize), } diff --git a/pkg/collector/upgrade/versions.go b/pkg/collector/upgrade/versions.go index 2d856c4a1f..d493583478 100644 --- a/pkg/collector/upgrade/versions.go +++ b/pkg/collector/upgrade/versions.go @@ -106,6 +106,10 @@ var ( Version: *semver.MustParse("0.110.0"), upgradeV1beta1: upgrade0_110_0, }, + { + Version: *semver.MustParse("0.111.0"), + upgradeV1beta1: upgrade0_111_0, + }, } // Latest represents the latest version that we need to upgrade. This is not necessarily the latest known version. diff --git a/pkg/constants/env.go b/pkg/constants/env.go index ac89f13e6d..27963fb900 100644 --- a/pkg/constants/env.go +++ b/pkg/constants/env.go @@ -15,12 +15,16 @@ package constants const ( - EnvOTELServiceName = "OTEL_SERVICE_NAME" - EnvOTELExporterOTLPEndpoint = "OTEL_EXPORTER_OTLP_ENDPOINT" - EnvOTELResourceAttrs = "OTEL_RESOURCE_ATTRIBUTES" - EnvOTELPropagators = "OTEL_PROPAGATORS" - EnvOTELTracesSampler = "OTEL_TRACES_SAMPLER" - EnvOTELTracesSamplerArg = "OTEL_TRACES_SAMPLER_ARG" + EnvOTELServiceName = "OTEL_SERVICE_NAME" + EnvOTELResourceAttrs = "OTEL_RESOURCE_ATTRIBUTES" + EnvOTELPropagators = "OTEL_PROPAGATORS" + EnvOTELTracesSampler = "OTEL_TRACES_SAMPLER" + EnvOTELTracesSamplerArg = "OTEL_TRACES_SAMPLER_ARG" + + EnvOTELExporterOTLPEndpoint = "OTEL_EXPORTER_OTLP_ENDPOINT" + EnvOTELExporterCertificate = "OTEL_EXPORTER_OTLP_CERTIFICATE" + EnvOTELExporterClientCertificate = "OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE" + EnvOTELExporterClientKey = "OTEL_EXPORTER_OTLP_CLIENT_KEY" InstrumentationPrefix = "instrumentation.opentelemetry.io/" AnnotationDefaultAutoInstrumentationJava = InstrumentationPrefix + "default-auto-instrumentation-java-image" @@ -36,6 +40,7 @@ const ( LabelAppVersion = "app.kubernetes.io/version" LabelAppPartOf = 
"app.kubernetes.io/part-of" + LabelTargetAllocator = "opentelemetry.io/target-allocator" ResourceAttributeAnnotationPrefix = "resource.opentelemetry.io/" EnvPodName = "OTEL_RESOURCE_ATTRIBUTES_POD_NAME" @@ -52,4 +57,9 @@ const ( FlagNginx = "enable-nginx-instrumentation" FlagNodeJS = "enable-nodejs-instrumentation" FlagJava = "enable-java-instrumentation" + + TACollectorTLSDirPath = "/tls" + TACollectorCAFileName = "ca.crt" + TACollectorTLSKeyFileName = "tls.key" + TACollectorTLSCertFileName = "tls.crt" ) diff --git a/pkg/featuregate/featuregate.go b/pkg/featuregate/featuregate.go index bf83d666ce..e08b0fb0c3 100644 --- a/pkg/featuregate/featuregate.go +++ b/pkg/featuregate/featuregate.go @@ -25,6 +25,26 @@ const ( ) var ( + // CollectorUsesTargetAllocatorCR is the feature gate that enables the OpenTelemetryCollector reconciler to generate + // TargetAllocator CRs instead of generating the manifests for its resources directly. + CollectorUsesTargetAllocatorCR = featuregate.GlobalRegistry().MustRegister( + "operator.collector.targetallocatorcr", + featuregate.StageAlpha, + featuregate.WithRegisterDescription("causes collector reconciliation to create a target allocator CR instead of creating resources directly"), + featuregate.WithRegisterFromVersion("v0.112.0"), + ) + // EnableNativeSidecarContainers is the feature gate that controls whether a + // sidecar should be injected as a native sidecar or the classic way. + // Native sidecar containers have been available since kubernetes v1.28 in + // alpha and v1.29 in beta. + // It needs to be enabled with +featureGate=SidecarContainers. 
+ // See: + // https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features + EnableNativeSidecarContainers = featuregate.GlobalRegistry().MustRegister( + "operator.sidecarcontainers.native", + featuregate.StageAlpha, + featuregate.WithRegisterDescription("controls whether the operator supports sidecar containers as init containers. Should only be enabled on k8s v1.29+"), + ) // PrometheusOperatorIsAvailable is the feature gate that enables features associated to the Prometheus Operator. PrometheusOperatorIsAvailable = featuregate.GlobalRegistry().MustRegister( "operator.observability.prometheus", @@ -40,6 +60,21 @@ var ( featuregate.WithRegisterDescription("enables feature to set GOMEMLIMIT and GOMAXPROCS automatically"), featuregate.WithRegisterFromVersion("v0.100.0"), ) + // EnableTargetAllocatorMTLS is the feature gate that enables mTLS between the target allocator and the collector. + EnableTargetAllocatorMTLS = featuregate.GlobalRegistry().MustRegister( + "operator.targetallocator.mtls", + featuregate.StageAlpha, + featuregate.WithRegisterDescription("enables mTLS between the target allocator and the collector"), + featuregate.WithRegisterFromVersion("v0.111.0"), + ) + // EnableTargetAllocatorFallbackStrategy is the feature gate that enables consistent-hashing as the fallback + // strategy for allocation strategies that might not assign all jobs (per-node). + EnableTargetAllocatorFallbackStrategy = featuregate.GlobalRegistry().MustRegister( + "operator.targetallocator.fallbackstrategy", + featuregate.StageAlpha, + featuregate.WithRegisterDescription("enables fallback allocation strategy for the target allocator"), + featuregate.WithRegisterFromVersion("v0.114.0"), + ) // EnableConfigDefaulting is the feature gate that enables the operator to default the endpoint for known components. 
EnableConfigDefaulting = featuregate.GlobalRegistry().MustRegister( "operator.collector.default.config", diff --git a/pkg/instrumentation/annotation.go b/pkg/instrumentation/annotation.go index 28ef7bf3d5..c415b22dbf 100644 --- a/pkg/instrumentation/annotation.go +++ b/pkg/instrumentation/annotation.go @@ -30,6 +30,7 @@ const ( annotationInjectNodeJSContainersName = "instrumentation.opentelemetry.io/nodejs-container-names" annotationInjectPython = "instrumentation.opentelemetry.io/inject-python" annotationInjectPythonContainersName = "instrumentation.opentelemetry.io/python-container-names" + annotationPythonPlatform = "instrumentation.opentelemetry.io/otel-python-platform" annotationInjectDotNet = "instrumentation.opentelemetry.io/inject-dotnet" annotationDotNetRuntime = "instrumentation.opentelemetry.io/otel-dotnet-auto-runtime" annotationInjectDotnetContainersName = "instrumentation.opentelemetry.io/dotnet-container-names" diff --git a/pkg/instrumentation/apachehttpd.go b/pkg/instrumentation/apachehttpd.go index 39a9e2d96f..5675023cce 100644 --- a/pkg/instrumentation/apachehttpd.go +++ b/pkg/instrumentation/apachehttpd.go @@ -61,6 +61,8 @@ const ( func injectApacheHttpdagent(_ logr.Logger, apacheSpec v1alpha1.ApacheHttpd, pod corev1.Pod, useLabelsForResourceAttributes bool, index int, otlpEndpoint string, resourceMap map[string]string) corev1.Pod { + volume := instrVolume(apacheSpec.VolumeClaimTemplate, apacheAgentVolume, apacheSpec.VolumeSizeLimit) + // caller checks if there is at least one container container := &pod.Spec.Containers[index] @@ -135,14 +137,7 @@ func injectApacheHttpdagent(_ logr.Logger, apacheSpec v1alpha1.ApacheHttpd, pod // Copy OTEL module to a shared volume if isApacheInitContainerMissing(pod, apacheAgentInitContainerName) { // Inject volume for agent - pod.Spec.Volumes = append(pod.Spec.Volumes, corev1.Volume{ - Name: apacheAgentVolume, - VolumeSource: corev1.VolumeSource{ - EmptyDir: &corev1.EmptyDirVolumeSource{ - SizeLimit: 
volumeSize(apacheSpec.VolumeSizeLimit), - }, - }}) - + pod.Spec.Volumes = append(pod.Spec.Volumes, volume) pod.Spec.InitContainers = append(pod.Spec.InitContainers, corev1.Container{ Name: apacheAgentInitContainerName, Image: apacheSpec.Image, @@ -157,7 +152,7 @@ func injectApacheHttpdagent(_ logr.Logger, apacheSpec v1alpha1.ApacheHttpd, pod "echo \"$" + apacheAttributesEnvVar + "\" > " + apacheAgentConfDirFull + "/" + apacheAgentConfigFile + " && " + "sed -i 's/" + apacheServiceInstanceId + "/'${" + apacheServiceInstanceIdEnvVar + "}'/g' " + apacheAgentConfDirFull + "/" + apacheAgentConfigFile + " && " + // Include a link to include Apache agent configuration file into httpd.conf - "echo 'Include " + getApacheConfDir(apacheSpec.ConfigPath) + "/" + apacheAgentConfigFile + "' >> " + apacheAgentConfDirFull + "/" + apacheConfigFile, + "echo -e '\nInclude " + getApacheConfDir(apacheSpec.ConfigPath) + "/" + apacheAgentConfigFile + "' >> " + apacheAgentConfDirFull + "/" + apacheConfigFile, }, Env: []corev1.EnvVar{ { diff --git a/pkg/instrumentation/apachehttpd_test.go b/pkg/instrumentation/apachehttpd_test.go index 3a93d7418d..ad9287923a 100644 --- a/pkg/instrumentation/apachehttpd_test.go +++ b/pkg/instrumentation/apachehttpd_test.go @@ -79,7 +79,7 @@ func TestInjectApacheHttpdagent(t *testing.T) { Image: "foo/bar:1", Command: []string{"/bin/sh", "-c"}, Args: []string{ - "cp -r /opt/opentelemetry/* /opt/opentelemetry-webserver/agent && export agentLogDir=$(echo \"/opt/opentelemetry-webserver/agent/logs\" | sed 's,/,\\\\/,g') && cat /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml.template | sed 's/__agent_log_dir__/'${agentLogDir}'/g' > /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml &&echo \"$OTEL_APACHE_AGENT_CONF\" > /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && sed -i 's/<>/'${APACHE_SERVICE_INSTANCE_ID}'/g' /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && echo 'Include 
/usr/local/apache2/conf/opentemetry_agent.conf' >> /opt/opentelemetry-webserver/source-conf/httpd.conf"}, + "cp -r /opt/opentelemetry/* /opt/opentelemetry-webserver/agent && export agentLogDir=$(echo \"/opt/opentelemetry-webserver/agent/logs\" | sed 's,/,\\\\/,g') && cat /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml.template | sed 's/__agent_log_dir__/'${agentLogDir}'/g' > /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml &&echo \"$OTEL_APACHE_AGENT_CONF\" > /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && sed -i 's/<>/'${APACHE_SERVICE_INSTANCE_ID}'/g' /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && echo -e '\nInclude /usr/local/apache2/conf/opentemetry_agent.conf' >> /opt/opentelemetry-webserver/source-conf/httpd.conf"}, Env: []corev1.EnvVar{ { Name: apacheAttributesEnvVar, @@ -172,7 +172,7 @@ func TestInjectApacheHttpdagent(t *testing.T) { Image: "foo/bar:1", Command: []string{"/bin/sh", "-c"}, Args: []string{ - "cp -r /opt/opentelemetry/* /opt/opentelemetry-webserver/agent && export agentLogDir=$(echo \"/opt/opentelemetry-webserver/agent/logs\" | sed 's,/,\\\\/,g') && cat /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml.template | sed 's/__agent_log_dir__/'${agentLogDir}'/g' > /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml &&echo \"$OTEL_APACHE_AGENT_CONF\" > /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && sed -i 's/<>/'${APACHE_SERVICE_INSTANCE_ID}'/g' /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && echo 'Include /opt/customPath/opentemetry_agent.conf' >> /opt/opentelemetry-webserver/source-conf/httpd.conf"}, + "cp -r /opt/opentelemetry/* /opt/opentelemetry-webserver/agent && export agentLogDir=$(echo \"/opt/opentelemetry-webserver/agent/logs\" | sed 's,/,\\\\/,g') && cat /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml.template | sed 's/__agent_log_dir__/'${agentLogDir}'/g' > 
/opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml &&echo \"$OTEL_APACHE_AGENT_CONF\" > /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && sed -i 's/<>/'${APACHE_SERVICE_INSTANCE_ID}'/g' /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && echo -e '\nInclude /opt/customPath/opentemetry_agent.conf' >> /opt/opentelemetry-webserver/source-conf/httpd.conf"}, Env: []corev1.EnvVar{ { Name: apacheAttributesEnvVar, @@ -266,7 +266,7 @@ func TestInjectApacheHttpdagent(t *testing.T) { Image: "foo/bar:1", Command: []string{"/bin/sh", "-c"}, Args: []string{ - "cp -r /opt/opentelemetry/* /opt/opentelemetry-webserver/agent && export agentLogDir=$(echo \"/opt/opentelemetry-webserver/agent/logs\" | sed 's,/,\\\\/,g') && cat /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml.template | sed 's/__agent_log_dir__/'${agentLogDir}'/g' > /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml &&echo \"$OTEL_APACHE_AGENT_CONF\" > /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && sed -i 's/<>/'${APACHE_SERVICE_INSTANCE_ID}'/g' /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && echo 'Include /usr/local/apache2/conf/opentemetry_agent.conf' >> /opt/opentelemetry-webserver/source-conf/httpd.conf"}, + "cp -r /opt/opentelemetry/* /opt/opentelemetry-webserver/agent && export agentLogDir=$(echo \"/opt/opentelemetry-webserver/agent/logs\" | sed 's,/,\\\\/,g') && cat /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml.template | sed 's/__agent_log_dir__/'${agentLogDir}'/g' > /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml &&echo \"$OTEL_APACHE_AGENT_CONF\" > /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && sed -i 's/<>/'${APACHE_SERVICE_INSTANCE_ID}'/g' /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && echo -e '\nInclude /usr/local/apache2/conf/opentemetry_agent.conf' >> 
/opt/opentelemetry-webserver/source-conf/httpd.conf"}, Env: []corev1.EnvVar{ { Name: apacheAttributesEnvVar, @@ -365,7 +365,7 @@ func TestInjectApacheHttpdagent(t *testing.T) { Image: "foo/bar:1", Command: []string{"/bin/sh", "-c"}, Args: []string{ - "cp -r /opt/opentelemetry/* /opt/opentelemetry-webserver/agent && export agentLogDir=$(echo \"/opt/opentelemetry-webserver/agent/logs\" | sed 's,/,\\\\/,g') && cat /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml.template | sed 's/__agent_log_dir__/'${agentLogDir}'/g' > /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml &&echo \"$OTEL_APACHE_AGENT_CONF\" > /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && sed -i 's/<>/'${APACHE_SERVICE_INSTANCE_ID}'/g' /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && echo 'Include /usr/local/apache2/conf/opentemetry_agent.conf' >> /opt/opentelemetry-webserver/source-conf/httpd.conf"}, + "cp -r /opt/opentelemetry/* /opt/opentelemetry-webserver/agent && export agentLogDir=$(echo \"/opt/opentelemetry-webserver/agent/logs\" | sed 's,/,\\\\/,g') && cat /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml.template | sed 's/__agent_log_dir__/'${agentLogDir}'/g' > /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml &&echo \"$OTEL_APACHE_AGENT_CONF\" > /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && sed -i 's/<>/'${APACHE_SERVICE_INSTANCE_ID}'/g' /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && echo -e '\nInclude /usr/local/apache2/conf/opentemetry_agent.conf' >> /opt/opentelemetry-webserver/source-conf/httpd.conf"}, Env: []corev1.EnvVar{ { Name: apacheAttributesEnvVar, @@ -476,7 +476,7 @@ func TestInjectApacheHttpdagentUnknownNamespace(t *testing.T) { Image: "foo/bar:1", Command: []string{"/bin/sh", "-c"}, Args: []string{ - "cp -r /opt/opentelemetry/* /opt/opentelemetry-webserver/agent && export agentLogDir=$(echo 
\"/opt/opentelemetry-webserver/agent/logs\" | sed 's,/,\\\\/,g') && cat /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml.template | sed 's/__agent_log_dir__/'${agentLogDir}'/g' > /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml &&echo \"$OTEL_APACHE_AGENT_CONF\" > /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && sed -i 's/<>/'${APACHE_SERVICE_INSTANCE_ID}'/g' /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && echo 'Include /usr/local/apache2/conf/opentemetry_agent.conf' >> /opt/opentelemetry-webserver/source-conf/httpd.conf"}, + "cp -r /opt/opentelemetry/* /opt/opentelemetry-webserver/agent && export agentLogDir=$(echo \"/opt/opentelemetry-webserver/agent/logs\" | sed 's,/,\\\\/,g') && cat /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml.template | sed 's/__agent_log_dir__/'${agentLogDir}'/g' > /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml &&echo \"$OTEL_APACHE_AGENT_CONF\" > /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && sed -i 's/<>/'${APACHE_SERVICE_INSTANCE_ID}'/g' /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && echo -e '\nInclude /usr/local/apache2/conf/opentemetry_agent.conf' >> /opt/opentelemetry-webserver/source-conf/httpd.conf"}, Env: []corev1.EnvVar{ { Name: apacheAttributesEnvVar, diff --git a/pkg/instrumentation/dotnet.go b/pkg/instrumentation/dotnet.go index 437e256fc1..74f68744ac 100644 --- a/pkg/instrumentation/dotnet.go +++ b/pkg/instrumentation/dotnet.go @@ -52,6 +52,8 @@ const ( func injectDotNetSDK(dotNetSpec v1alpha1.DotNet, pod corev1.Pod, index int, runtime string) (corev1.Pod, error) { + volume := instrVolume(dotNetSpec.VolumeClaimTemplate, dotnetVolumeName, dotNetSpec.VolumeSizeLimit) + // caller checks if there is at least one container. 
container := &pod.Spec.Containers[index] @@ -110,27 +112,20 @@ func injectDotNetSDK(dotNetSpec v1alpha1.DotNet, pod corev1.Pod, index int, runt setDotNetEnvVar(container, envDotNetSharedStore, dotNetSharedStorePath, concatEnvValues) container.VolumeMounts = append(container.VolumeMounts, corev1.VolumeMount{ - Name: dotnetVolumeName, + Name: volume.Name, MountPath: dotnetInstrMountPath, }) // We just inject Volumes and init containers for the first processed container. if isInitContainerMissing(pod, dotnetInitContainerName) { - pod.Spec.Volumes = append(pod.Spec.Volumes, corev1.Volume{ - Name: dotnetVolumeName, - VolumeSource: corev1.VolumeSource{ - EmptyDir: &corev1.EmptyDirVolumeSource{ - SizeLimit: volumeSize(dotNetSpec.VolumeSizeLimit), - }, - }}) - + pod.Spec.Volumes = append(pod.Spec.Volumes, volume) pod.Spec.InitContainers = append(pod.Spec.InitContainers, corev1.Container{ Name: dotnetInitContainerName, Image: dotNetSpec.Image, Command: []string{"cp", "-r", "/autoinstrumentation/.", dotnetInstrMountPath}, Resources: dotNetSpec.Resources, VolumeMounts: []corev1.VolumeMount{{ - Name: dotnetVolumeName, + Name: volume.Name, MountPath: dotnetInstrMountPath, }}, }) diff --git a/pkg/instrumentation/exporter.go b/pkg/instrumentation/exporter.go new file mode 100644 index 0000000000..5598de24cf --- /dev/null +++ b/pkg/instrumentation/exporter.go @@ -0,0 +1,150 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package instrumentation + +import ( + "fmt" + "path/filepath" + + corev1 "k8s.io/api/core/v1" + + "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" + "github.com/open-telemetry/opentelemetry-operator/internal/naming" + "github.com/open-telemetry/opentelemetry-operator/pkg/constants" +) + +func configureExporter(exporter v1alpha1.Exporter, pod *corev1.Pod, container *corev1.Container) { + if exporter.Endpoint != "" { + if getIndexOfEnv(container.Env, constants.EnvOTELExporterOTLPEndpoint) == -1 { + container.Env = append(container.Env, corev1.EnvVar{ + Name: constants.EnvOTELExporterOTLPEndpoint, + Value: exporter.Endpoint, + }) + } + } + if exporter.TLS == nil { + return + } + // the name cannot be longer than 63 characters + secretVolumeName := naming.Truncate("otel-auto-secret-%s", 63, exporter.TLS.SecretName) + secretMountPath := fmt.Sprintf("/otel-auto-instrumentation-secret-%s", exporter.TLS.SecretName) + configMapVolumeName := naming.Truncate("otel-auto-configmap-%s", 63, exporter.TLS.ConfigMapName) + configMapMountPath := fmt.Sprintf("/otel-auto-instrumentation-configmap-%s", exporter.TLS.ConfigMapName) + + if exporter.TLS.CA != "" { + mountPath := secretMountPath + if exporter.TLS.ConfigMapName != "" { + mountPath = configMapMountPath + } + envVarVal := fmt.Sprintf("%s/%s", mountPath, exporter.TLS.CA) + if filepath.IsAbs(exporter.TLS.CA) { + envVarVal = exporter.TLS.CA + } + if getIndexOfEnv(container.Env, constants.EnvOTELExporterCertificate) == -1 { + container.Env = append(container.Env, corev1.EnvVar{ + Name: constants.EnvOTELExporterCertificate, + Value: envVarVal, + }) + } + } + if exporter.TLS.Cert != "" { + envVarVal := fmt.Sprintf("%s/%s", secretMountPath, exporter.TLS.Cert) + if filepath.IsAbs(exporter.TLS.Cert) { + envVarVal = exporter.TLS.Cert + } + if getIndexOfEnv(container.Env, constants.EnvOTELExporterClientCertificate) == -1 { + container.Env = append(container.Env, corev1.EnvVar{ + Name: 
constants.EnvOTELExporterClientCertificate, + Value: envVarVal, + }) + } + } + if exporter.TLS.Key != "" { + envVarVar := fmt.Sprintf("%s/%s", secretMountPath, exporter.TLS.Key) + if filepath.IsAbs(exporter.TLS.Key) { + envVarVar = exporter.TLS.Key + } + if getIndexOfEnv(container.Env, constants.EnvOTELExporterClientKey) == -1 { + container.Env = append(container.Env, corev1.EnvVar{ + Name: constants.EnvOTELExporterClientKey, + Value: envVarVar, + }) + } + } + + if exporter.TLS.SecretName != "" { + addVolume := true + for _, vol := range pod.Spec.Volumes { + if vol.Name == secretVolumeName { + addVolume = false + } + } + if addVolume { + pod.Spec.Volumes = append(pod.Spec.Volumes, corev1.Volume{ + Name: secretVolumeName, + VolumeSource: corev1.VolumeSource{ + Secret: &corev1.SecretVolumeSource{ + SecretName: exporter.TLS.SecretName, + }, + }}) + } + addVolumeMount := true + for _, vol := range container.VolumeMounts { + if vol.Name == secretVolumeName { + addVolumeMount = false + } + } + if addVolumeMount { + container.VolumeMounts = append(container.VolumeMounts, corev1.VolumeMount{ + Name: secretVolumeName, + MountPath: secretMountPath, + ReadOnly: true, + }) + } + } + + if exporter.TLS.ConfigMapName != "" { + addVolume := true + for _, vol := range pod.Spec.Volumes { + if vol.Name == configMapVolumeName { + addVolume = false + } + } + if addVolume { + pod.Spec.Volumes = append(pod.Spec.Volumes, corev1.Volume{ + Name: configMapVolumeName, + VolumeSource: corev1.VolumeSource{ + ConfigMap: &corev1.ConfigMapVolumeSource{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: exporter.TLS.ConfigMapName, + }, + }, + }}) + } + addVolumeMount := true + for _, vol := range container.VolumeMounts { + if vol.Name == configMapVolumeName { + addVolumeMount = false + } + } + if addVolumeMount { + container.VolumeMounts = append(container.VolumeMounts, corev1.VolumeMount{ + Name: configMapVolumeName, + MountPath: configMapMountPath, + ReadOnly: true, + }) + } + } +} diff 
--git a/pkg/instrumentation/exporter_test.go b/pkg/instrumentation/exporter_test.go new file mode 100644 index 0000000000..2fddf1264a --- /dev/null +++ b/pkg/instrumentation/exporter_test.go @@ -0,0 +1,209 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package instrumentation + +import ( + "testing" + + "github.com/stretchr/testify/assert" + corev1 "k8s.io/api/core/v1" + + "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1" +) + +func TestExporter(t *testing.T) { + tests := []struct { + name string + exporter v1alpha1.Exporter + expected corev1.Pod + }{ + { + name: "ca, crt and key from secret", + exporter: v1alpha1.Exporter{ + Endpoint: "https://collector:4318", + TLS: &v1alpha1.TLS{ + SecretName: "my-certs", + CA: "ca.crt", + Cert: "cert.crt", + Key: "key.key", + }, + }, + expected: corev1.Pod{ + Spec: corev1.PodSpec{ + Volumes: []corev1.Volume{ + { + Name: "otel-auto-secret-my-certs", + VolumeSource: corev1.VolumeSource{ + Secret: &corev1.SecretVolumeSource{ + SecretName: "my-certs", + }, + }, + }, + }, + Containers: []corev1.Container{ + { + VolumeMounts: []corev1.VolumeMount{ + { + Name: "otel-auto-secret-my-certs", + ReadOnly: true, + MountPath: "/otel-auto-instrumentation-secret-my-certs", + }, + }, + Env: []corev1.EnvVar{ + { + Name: "OTEL_EXPORTER_OTLP_ENDPOINT", + Value: "https://collector:4318", + }, + { + Name: "OTEL_EXPORTER_OTLP_CERTIFICATE", + Value: 
"/otel-auto-instrumentation-secret-my-certs/ca.crt", + }, + { + Name: "OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE", + Value: "/otel-auto-instrumentation-secret-my-certs/cert.crt", + }, + { + Name: "OTEL_EXPORTER_OTLP_CLIENT_KEY", + Value: "/otel-auto-instrumentation-secret-my-certs/key.key", + }, + }, + }, + }, + }, + }, + }, + { + name: "crt and key from secret and ca from configmap", + exporter: v1alpha1.Exporter{ + Endpoint: "https://collector:4318", + TLS: &v1alpha1.TLS{ + SecretName: "my-certs", + ConfigMapName: "ca-bundle", + CA: "ca.crt", + Cert: "cert.crt", + Key: "key.key", + }, + }, + expected: corev1.Pod{ + Spec: corev1.PodSpec{ + Volumes: []corev1.Volume{ + { + Name: "otel-auto-secret-my-certs", + VolumeSource: corev1.VolumeSource{ + Secret: &corev1.SecretVolumeSource{ + SecretName: "my-certs", + }, + }, + }, + { + Name: "otel-auto-configmap-ca-bundle", + VolumeSource: corev1.VolumeSource{ + ConfigMap: &corev1.ConfigMapVolumeSource{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "ca-bundle", + }, + }, + }, + }, + }, + Containers: []corev1.Container{ + { + VolumeMounts: []corev1.VolumeMount{ + { + Name: "otel-auto-secret-my-certs", + ReadOnly: true, + MountPath: "/otel-auto-instrumentation-secret-my-certs", + }, + { + Name: "otel-auto-configmap-ca-bundle", + ReadOnly: true, + MountPath: "/otel-auto-instrumentation-configmap-ca-bundle", + }, + }, + Env: []corev1.EnvVar{ + { + Name: "OTEL_EXPORTER_OTLP_ENDPOINT", + Value: "https://collector:4318", + }, + { + Name: "OTEL_EXPORTER_OTLP_CERTIFICATE", + Value: "/otel-auto-instrumentation-configmap-ca-bundle/ca.crt", + }, + { + Name: "OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE", + Value: "/otel-auto-instrumentation-secret-my-certs/cert.crt", + }, + { + Name: "OTEL_EXPORTER_OTLP_CLIENT_KEY", + Value: "/otel-auto-instrumentation-secret-my-certs/key.key", + }, + }, + }, + }, + }, + }, + }, + { + name: "ca, crt key absolute paths", + exporter: v1alpha1.Exporter{ + Endpoint: "https://collector:4318", + TLS: 
&v1alpha1.TLS{ + CA: "/ca.crt", + Cert: "/cert.crt", + Key: "/key.key", + }, + }, + expected: corev1.Pod{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Env: []corev1.EnvVar{ + { + Name: "OTEL_EXPORTER_OTLP_ENDPOINT", + Value: "https://collector:4318", + }, + { + Name: "OTEL_EXPORTER_OTLP_CERTIFICATE", + Value: "/ca.crt", + }, + { + Name: "OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE", + Value: "/cert.crt", + }, + { + Name: "OTEL_EXPORTER_OTLP_CLIENT_KEY", + Value: "/key.key", + }, + }, + }, + }, + }, + }, + }, + } + for _, test := range tests { + t.Run(test.name, func(t *testing.T) { + pod := corev1.Pod{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {}, + }, + }, + } + configureExporter(test.exporter, &pod, &pod.Spec.Containers[0]) + assert.Equal(t, test.expected, pod) + }) + } +} diff --git a/pkg/instrumentation/helper.go b/pkg/instrumentation/helper.go index 1968fe8973..c1d5994853 100644 --- a/pkg/instrumentation/helper.go +++ b/pkg/instrumentation/helper.go @@ -16,6 +16,7 @@ package instrumentation import ( "fmt" + "reflect" "regexp" "sort" "strings" @@ -123,6 +124,28 @@ func isInstrWithoutContainers(inst instrumentationWithContainers) int { return 0 } +// Return volume if defined, otherwise return emptyDir with given name and size limit. 
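The new `instrVolume` helper (shared by the Java, NodeJS, .NET and Python injectors) reduces to a zero-value check on the claim template. A minimal sketch of that decision logic, using simplified stand-in types in place of the real `corev1.PersistentVolumeClaimTemplate` and `corev1.Volume` structs (field names here are illustrative only):

```go
package main

import (
	"fmt"
	"reflect"
)

// claimTemplate is a stand-in for corev1.PersistentVolumeClaimTemplate.
type claimTemplate struct {
	StorageClassName string
	AccessModes      []string
}

// volume is a stand-in for corev1.Volume.
type volume struct {
	Name   string
	Source string // "ephemeral" when a claim template is set, else "emptyDir"
}

// instrVolume mirrors the helper's shape: prefer an ephemeral volume backed
// by the claim template when one is defined (non-zero), otherwise fall back
// to an emptyDir, which the real code gives a size limit.
func instrVolume(tmpl claimTemplate, name string) volume {
	if !reflect.ValueOf(tmpl).IsZero() {
		return volume{Name: name, Source: "ephemeral"}
	}
	return volume{Name: name, Source: "emptyDir"}
}

func main() {
	fmt.Println(instrVolume(claimTemplate{}, "inst").Source)
	fmt.Println(instrVolume(claimTemplate{StorageClassName: "fast"}, "inst").Source)
}
```

Note that `reflect.Value.IsZero` compares every field against its zero value, so any populated field in the claim template flips the injectors over to an ephemeral volume.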
+func instrVolume(volumeClaimTemplate corev1.PersistentVolumeClaimTemplate, name string, quantity *resource.Quantity) corev1.Volume { + if !reflect.ValueOf(volumeClaimTemplate).IsZero() { + return corev1.Volume{ + Name: name, + VolumeSource: corev1.VolumeSource{ + Ephemeral: &corev1.EphemeralVolumeSource{ + VolumeClaimTemplate: &volumeClaimTemplate, + }, + }, + } + } + + return corev1.Volume{ + Name: name, + VolumeSource: corev1.VolumeSource{ + EmptyDir: &corev1.EmptyDirVolumeSource{ + SizeLimit: volumeSize(quantity), + }, + }} +} + func volumeSize(quantity *resource.Quantity) *resource.Quantity { if quantity == nil { return &defaultSize diff --git a/pkg/instrumentation/helper_test.go b/pkg/instrumentation/helper_test.go index d852c94a4a..9272d50a47 100644 --- a/pkg/instrumentation/helper_test.go +++ b/pkg/instrumentation/helper_test.go @@ -20,6 +20,7 @@ import ( "github.com/stretchr/testify/assert" corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" "github.com/open-telemetry/opentelemetry-operator/pkg/constants" ) @@ -188,6 +189,92 @@ func TestDuplicatedContainers(t *testing.T) { } } +func TestInstrVolume(t *testing.T) { + tests := []struct { + name string + volume corev1.PersistentVolumeClaimTemplate + volumeName string + quantity *resource.Quantity + expected corev1.Volume + }{ + { + name: "With volume", + volume: corev1.PersistentVolumeClaimTemplate{ + Spec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce}, + }, + }, + volumeName: "default-vol", + quantity: nil, + expected: corev1.Volume{ + Name: "default-vol", + VolumeSource: corev1.VolumeSource{ + Ephemeral: &corev1.EphemeralVolumeSource{ + VolumeClaimTemplate: &corev1.PersistentVolumeClaimTemplate{ + Spec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce}, + }, + }, + }, + }}, + }, + { + name: "With volume size limit", + volume: corev1.PersistentVolumeClaimTemplate{}, + 
volumeName: "default-vol", + quantity: &defaultVolumeLimitSize, + expected: corev1.Volume{ + Name: "default-vol", + VolumeSource: corev1.VolumeSource{ + EmptyDir: &corev1.EmptyDirVolumeSource{ + SizeLimit: &defaultVolumeLimitSize, + }, + }}, + }, + { + name: "No volume or size limit", + volume: corev1.PersistentVolumeClaimTemplate{}, + volumeName: "default-vol", + quantity: nil, + expected: corev1.Volume{ + Name: "default-vol", + VolumeSource: corev1.VolumeSource{ + EmptyDir: &corev1.EmptyDirVolumeSource{ + SizeLimit: &defaultSize, + }, + }}, + }, + { + name: "With volume and size limit", + volume: corev1.PersistentVolumeClaimTemplate{ + Spec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce}, + }, + }, + volumeName: "default-vol", + quantity: &defaultVolumeLimitSize, + expected: corev1.Volume{ + Name: "default-vol", + VolumeSource: corev1.VolumeSource{ + Ephemeral: &corev1.EphemeralVolumeSource{ + VolumeClaimTemplate: &corev1.PersistentVolumeClaimTemplate{ + Spec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce}, + }, + }, + }, + }}, + }, + } + + for _, test := range tests { + t.Run(test.name, func(t *testing.T) { + res := instrVolume(test.volume, test.volumeName, test.quantity) + assert.Equal(t, test.expected, res) + }) + } +} + func TestInstrWithContainers(t *testing.T) { tests := []struct { name string diff --git a/pkg/instrumentation/javaagent.go b/pkg/instrumentation/javaagent.go index f77d3ae0c3..ef91d296d8 100644 --- a/pkg/instrumentation/javaagent.go +++ b/pkg/instrumentation/javaagent.go @@ -31,6 +31,8 @@ const ( ) func injectJavaagent(javaSpec v1alpha1.Java, pod corev1.Pod, index int) (corev1.Pod, error) { + volume := instrVolume(javaSpec.VolumeClaimTemplate, javaVolumeName, javaSpec.VolumeSizeLimit) + // caller checks if there is at least one container. 
container := &pod.Spec.Containers[index] @@ -63,27 +65,20 @@ func injectJavaagent(javaSpec v1alpha1.Java, pod corev1.Pod, index int) (corev1. } container.VolumeMounts = append(container.VolumeMounts, corev1.VolumeMount{ - Name: javaVolumeName, + Name: volume.Name, MountPath: javaInstrMountPath, }) // We just inject Volumes and init containers for the first processed container. if isInitContainerMissing(pod, javaInitContainerName) { - pod.Spec.Volumes = append(pod.Spec.Volumes, corev1.Volume{ - Name: javaVolumeName, - VolumeSource: corev1.VolumeSource{ - EmptyDir: &corev1.EmptyDirVolumeSource{ - SizeLimit: volumeSize(javaSpec.VolumeSizeLimit), - }, - }}) - + pod.Spec.Volumes = append(pod.Spec.Volumes, volume) pod.Spec.InitContainers = append(pod.Spec.InitContainers, corev1.Container{ Name: javaInitContainerName, Image: javaSpec.Image, Command: []string{"cp", "/javaagent.jar", javaInstrMountPath + "/javaagent.jar"}, Resources: javaSpec.Resources, VolumeMounts: []corev1.VolumeMount{{ - Name: javaVolumeName, + Name: volume.Name, MountPath: javaInstrMountPath, }}, }) @@ -95,7 +90,7 @@ func injectJavaagent(javaSpec v1alpha1.Java, pod corev1.Pod, index int) (corev1. Command: []string{"cp", "-r", extension.Dir + "/.", javaInstrMountPath + "/extensions"}, Resources: javaSpec.Resources, VolumeMounts: []corev1.VolumeMount{{ - Name: javaVolumeName, + Name: volume.Name, MountPath: javaInstrMountPath, }}, }) diff --git a/pkg/instrumentation/nodejs.go b/pkg/instrumentation/nodejs.go index 655e35ee5f..a3d02ea53d 100644 --- a/pkg/instrumentation/nodejs.go +++ b/pkg/instrumentation/nodejs.go @@ -29,6 +29,8 @@ const ( ) func injectNodeJSSDK(nodeJSSpec v1alpha1.NodeJS, pod corev1.Pod, index int) (corev1.Pod, error) { + volume := instrVolume(nodeJSSpec.VolumeClaimTemplate, nodejsVolumeName, nodeJSSpec.VolumeSizeLimit) + // caller checks if there is at least one container. 
container := &pod.Spec.Containers[index] @@ -56,27 +58,20 @@ func injectNodeJSSDK(nodeJSSpec v1alpha1.NodeJS, pod corev1.Pod, index int) (cor } container.VolumeMounts = append(container.VolumeMounts, corev1.VolumeMount{ - Name: nodejsVolumeName, + Name: volume.Name, MountPath: nodejsInstrMountPath, }) // We just inject Volumes and init containers for the first processed container if isInitContainerMissing(pod, nodejsInitContainerName) { - pod.Spec.Volumes = append(pod.Spec.Volumes, corev1.Volume{ - Name: nodejsVolumeName, - VolumeSource: corev1.VolumeSource{ - EmptyDir: &corev1.EmptyDirVolumeSource{ - SizeLimit: volumeSize(nodeJSSpec.VolumeSizeLimit), - }, - }}) - + pod.Spec.Volumes = append(pod.Spec.Volumes, volume) pod.Spec.InitContainers = append(pod.Spec.InitContainers, corev1.Container{ Name: nodejsInitContainerName, Image: nodeJSSpec.Image, Command: []string{"cp", "-r", "/autoinstrumentation/.", nodejsInstrMountPath}, Resources: nodeJSSpec.Resources, VolumeMounts: []corev1.VolumeMount{{ - Name: nodejsVolumeName, + Name: volume.Name, MountPath: nodejsInstrMountPath, }}, }) diff --git a/pkg/instrumentation/podmutator.go b/pkg/instrumentation/podmutator.go index b1a2356d04..6e17f0fa49 100644 --- a/pkg/instrumentation/podmutator.go +++ b/pkg/instrumentation/podmutator.go @@ -22,6 +22,7 @@ import ( "github.com/go-logr/logr" corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/types" "k8s.io/client-go/tools/record" @@ -320,6 +321,7 @@ func (pm *instPodMutator) Mutate(ctx context.Context, ns corev1.Namespace, pod c } if pm.config.EnablePythonAutoInstrumentation() || inst == nil { insts.Python.Instrumentation = inst + insts.Python.AdditionalAnnotations = map[string]string{annotationPythonPlatform: annotationValue(ns.ObjectMeta, pod.ObjectMeta, annotationPythonPlatform)} } else { logger.Error(nil, "support for Python auto instrumentation is not enabled") 
pm.Recorder.Event(pod.DeepCopy(), "Warning", "InstrumentationRequestRejected", "support for Python auto instrumentation is not enabled") @@ -395,6 +397,11 @@ func (pm *instPodMutator) Mutate(ctx context.Context, ns corev1.Namespace, pod c return pod, err } + if err = pm.validateInstrumentations(ctx, insts, ns.Name); err != nil { + logger.Error(err, "failed to validate instrumentations") + return pod, err + } + // We retrieve the annotation for podname if pm.config.EnableMultiInstrumentation() { err = insts.setLanguageSpecificContainers(ns.ObjectMeta, pod.ObjectMeta) @@ -460,3 +467,55 @@ func (pm *instPodMutator) selectInstrumentationInstanceFromNamespace(ctx context return &otelInsts.Items[0], nil } } + +func (pm *instPodMutator) validateInstrumentations(ctx context.Context, inst languageInstrumentations, podNamespace string) error { + instrumentations := []struct { + instrumentation *v1alpha1.Instrumentation + }{ + {inst.Java.Instrumentation}, + {inst.Python.Instrumentation}, + {inst.NodeJS.Instrumentation}, + {inst.DotNet.Instrumentation}, + {inst.Go.Instrumentation}, + {inst.ApacheHttpd.Instrumentation}, + {inst.Nginx.Instrumentation}, + {inst.Sdk.Instrumentation}, + } + var errs []error + for _, i := range instrumentations { + if i.instrumentation != nil { + if err := pm.validateInstrumentation(ctx, i.instrumentation, podNamespace); err != nil { + errs = append(errs, err) + } + } + } + + if len(errs) > 0 { + return errors.Join(errs...) 
+ } + return nil +} + +func (pm *instPodMutator) validateInstrumentation(ctx context.Context, inst *v1alpha1.Instrumentation, podNamespace string) error { + // Check if the secret and configmap exist + // If they don't exist the pod cannot start + var errs []error + if inst.Spec.Exporter.TLS != nil { + if inst.Spec.Exporter.TLS.SecretName != "" { + nsn := types.NamespacedName{Name: inst.Spec.Exporter.TLS.SecretName, Namespace: podNamespace} + if err := pm.Client.Get(ctx, nsn, &corev1.Secret{}); apierrors.IsNotFound(err) { + errs = append(errs, fmt.Errorf("secret %s with certificates does not exist: %w", nsn.String(), err)) + } + } + if inst.Spec.Exporter.TLS.ConfigMapName != "" { + nsn := types.NamespacedName{Name: inst.Spec.Exporter.TLS.ConfigMapName, Namespace: podNamespace} + if err := pm.Client.Get(ctx, nsn, &corev1.ConfigMap{}); apierrors.IsNotFound(err) { + errs = append(errs, fmt.Errorf("configmap %s with CA certificate does not exist: %w", nsn.String(), err)) + } + } + } + if len(errs) > 0 { + return errors.Join(errs...)
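The validation path deliberately accumulates every missing reference before failing, so a pod owner sees all problems in one webhook response rather than one at a time. A hedged sketch of that accumulate-then-join shape; the `lookup` callback is a stand-in for the `pm.Client.Get` calls in the real code:

```go
package main

import (
	"errors"
	"fmt"
)

// validateRefs mirrors validateInstrumentation's structure: check each TLS
// reference, collect failures, and join them into a single error.
func validateRefs(secretName, configMapName string, lookup func(kind, name string) bool) error {
	var errs []error
	if secretName != "" && !lookup("secret", secretName) {
		errs = append(errs, fmt.Errorf("secret %s with certificates does not exist", secretName))
	}
	if configMapName != "" && !lookup("configmap", configMapName) {
		errs = append(errs, fmt.Errorf("configmap %s with CA certificate does not exist", configMapName))
	}
	if len(errs) > 0 {
		// errors.Join (Go 1.20+) produces one error whose message is the
		// newline-separated concatenation of all joined messages.
		return errors.Join(errs...)
	}
	return nil
}

func main() {
	missing := func(kind, name string) bool { return false }
	fmt.Println(validateRefs("my-certs", "my-ca-bundle", missing))
}
```

This is also why the expected error string in the test below contains a `\n` between the secret and configmap messages: that is the separator `errors.Join` uses.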
+ } + return nil +} diff --git a/pkg/instrumentation/podmutator_test.go b/pkg/instrumentation/podmutator_test.go index 2eddd045f3..3fd085d539 100644 --- a/pkg/instrumentation/podmutator_test.go +++ b/pkg/instrumentation/podmutator_test.go @@ -42,6 +42,8 @@ func TestMutatePod(t *testing.T) { expected corev1.Pod inst v1alpha1.Instrumentation ns corev1.Namespace + secret *corev1.Secret + configMap *corev1.ConfigMap setFeatureGates func(t *testing.T) config config.Config }{ @@ -52,6 +54,18 @@ func TestMutatePod(t *testing.T) { Name: "javaagent", }, }, + secret: &corev1.Secret{ + ObjectMeta: metav1.ObjectMeta{ + Name: "my-certs", + Namespace: "javaagent", + }, + }, + configMap: &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Name: "my-ca-bundle", + Namespace: "javaagent", + }, + }, inst: v1alpha1.Instrumentation{ ObjectMeta: metav1.ObjectMeta{ Name: "example-inst", @@ -103,6 +117,13 @@ func TestMutatePod(t *testing.T) { }, Exporter: v1alpha1.Exporter{ Endpoint: "http://collector:12345", + TLS: &v1alpha1.TLS{ + SecretName: "my-certs", + ConfigMapName: "my-ca-bundle", + CA: "ca.crt", + Cert: "cert.crt", + Key: "key.key", + }, }, }, }, @@ -136,6 +157,24 @@ func TestMutatePod(t *testing.T) { }, }, }, + { + Name: "otel-auto-secret-my-certs", + VolumeSource: corev1.VolumeSource{ + Secret: &corev1.SecretVolumeSource{ + SecretName: "my-certs", + }, + }, + }, + { + Name: "otel-auto-configmap-my-ca-bundle", + VolumeSource: corev1.VolumeSource{ + ConfigMap: &corev1.ConfigMapVolumeSource{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "my-ca-bundle", + }, + }, + }, + }, }, InitContainers: []corev1.Container{ { @@ -212,6 +251,18 @@ func TestMutatePod(t *testing.T) { Name: "OTEL_SERVICE_NAME", Value: "app", }, + { + Name: "OTEL_EXPORTER_OTLP_CERTIFICATE", + Value: "/otel-auto-instrumentation-configmap-my-ca-bundle/ca.crt", + }, + { + Name: "OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE", + Value: "/otel-auto-instrumentation-secret-my-certs/cert.crt", + }, + { + Name: 
"OTEL_EXPORTER_OTLP_CLIENT_KEY", + Value: "/otel-auto-instrumentation-secret-my-certs/key.key", + }, { Name: "OTEL_RESOURCE_ATTRIBUTES_POD_NAME", ValueFrom: &corev1.EnvVarSource{ @@ -238,6 +289,16 @@ func TestMutatePod(t *testing.T) { Name: javaVolumeName, MountPath: javaInstrMountPath, }, + { + Name: "otel-auto-secret-my-certs", + ReadOnly: true, + MountPath: "/otel-auto-instrumentation-secret-my-certs", + }, + { + Name: "otel-auto-configmap-my-ca-bundle", + ReadOnly: true, + MountPath: "/otel-auto-instrumentation-configmap-my-ca-bundle", + }, }, }, }, @@ -1197,6 +1258,10 @@ func TestMutatePod(t *testing.T) { Name: "OTEL_METRICS_EXPORTER", Value: "otlp", }, + { + Name: "OTEL_LOGS_EXPORTER", + Value: "otlp", + }, { Name: "OTEL_EXPORTER_OTLP_ENDPOINT", Value: "http://localhost:4318", @@ -1300,6 +1365,10 @@ func TestMutatePod(t *testing.T) { Name: "OTEL_METRICS_EXPORTER", Value: "otlp", }, + { + Name: "OTEL_LOGS_EXPORTER", + Value: "otlp", + }, { Name: "OTEL_EXPORTER_OTLP_ENDPOINT", Value: "http://localhost:4318", @@ -1393,6 +1462,10 @@ func TestMutatePod(t *testing.T) { Name: "OTEL_METRICS_EXPORTER", Value: "otlp", }, + { + Name: "OTEL_LOGS_EXPORTER", + Value: "otlp", + }, { Name: "OTEL_EXPORTER_OTLP_ENDPOINT", Value: "http://localhost:4318", @@ -1501,6 +1574,10 @@ func TestMutatePod(t *testing.T) { Name: "OTEL_METRICS_EXPORTER", Value: "otlp", }, + { + Name: "OTEL_LOGS_EXPORTER", + Value: "otlp", + }, { Name: "OTEL_EXPORTER_OTLP_ENDPOINT", Value: "http://localhost:4318", @@ -1592,6 +1669,10 @@ func TestMutatePod(t *testing.T) { Name: "OTEL_METRICS_EXPORTER", Value: "otlp", }, + { + Name: "OTEL_LOGS_EXPORTER", + Value: "otlp", + }, { Name: "OTEL_EXPORTER_OTLP_ENDPOINT", Value: "http://localhost:4318", @@ -1685,6 +1766,10 @@ func TestMutatePod(t *testing.T) { Name: "OTEL_METRICS_EXPORTER", Value: "otlp", }, + { + Name: "OTEL_LOGS_EXPORTER", + Value: "otlp", + }, { Name: "OTEL_EXPORTER_OTLP_ENDPOINT", Value: "http://localhost:4318", @@ -2862,7 +2947,7 @@ func 
TestMutatePod(t *testing.T) { Image: "otel/apache-httpd:1", Command: []string{"/bin/sh", "-c"}, Args: []string{ - "cp -r /opt/opentelemetry/* /opt/opentelemetry-webserver/agent && export agentLogDir=$(echo \"/opt/opentelemetry-webserver/agent/logs\" | sed 's,/,\\\\/,g') && cat /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml.template | sed 's/__agent_log_dir__/'${agentLogDir}'/g' > /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml &&echo \"$OTEL_APACHE_AGENT_CONF\" > /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && sed -i 's/<>/'${APACHE_SERVICE_INSTANCE_ID}'/g' /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && echo 'Include /usr/local/apache2/conf/opentemetry_agent.conf' >> /opt/opentelemetry-webserver/source-conf/httpd.conf"}, + "cp -r /opt/opentelemetry/* /opt/opentelemetry-webserver/agent && export agentLogDir=$(echo \"/opt/opentelemetry-webserver/agent/logs\" | sed 's,/,\\\\/,g') && cat /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml.template | sed 's/__agent_log_dir__/'${agentLogDir}'/g' > /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml &&echo \"$OTEL_APACHE_AGENT_CONF\" > /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && sed -i 's/<>/'${APACHE_SERVICE_INSTANCE_ID}'/g' /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && echo -e '\nInclude /usr/local/apache2/conf/opentemetry_agent.conf' >> /opt/opentelemetry-webserver/source-conf/httpd.conf"}, Env: []corev1.EnvVar{ { Name: apacheAttributesEnvVar, @@ -4064,6 +4149,10 @@ func TestMutatePod(t *testing.T) { Name: "OTEL_METRICS_EXPORTER", Value: "otlp", }, + { + Name: "OTEL_LOGS_EXPORTER", + Value: "otlp", + }, { Name: "OTEL_SERVICE_NAME", Value: "python1", @@ -4139,6 +4228,10 @@ func TestMutatePod(t *testing.T) { Name: "OTEL_METRICS_EXPORTER", Value: "otlp", }, + { + Name: "OTEL_LOGS_EXPORTER", + Value: "otlp", + }, { Name: "OTEL_SERVICE_NAME", Value: "python2", @@ 
-4785,6 +4878,49 @@ func TestMutatePod(t *testing.T) { config.WithEnableNodeJSInstrumentation(false), ), }, + { + name: "secret and configmap do not exist", + ns: corev1.Namespace{ + ObjectMeta: metav1.ObjectMeta{ + Name: "error-missing-secrets", + }, + }, + inst: v1alpha1.Instrumentation{ + ObjectMeta: metav1.ObjectMeta{ + Name: "example-inst", + Namespace: "error-missing-secrets", + }, + Spec: v1alpha1.InstrumentationSpec{ + Exporter: v1alpha1.Exporter{ + Endpoint: "http://collector:12345", + TLS: &v1alpha1.TLS{ + SecretName: "my-certs", + ConfigMapName: "my-ca-bundle", + CA: "ca.crt", + Cert: "cert.crt", + Key: "key.key", + }, + }, + }, + }, + pod: corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Annotations: map[string]string{ + annotationInjectJava: "true", + }, + Namespace: "error-missing-secrets", + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "app", + }, + }, + }, + }, + config: config.New(), + err: "secret error-missing-secrets/my-certs with certificates does not exist: secrets \"my-certs\" not found\nconfigmap error-missing-secrets/my-ca-bundle with CA certificate does not exist: configmaps \"my-ca-bundle\" not found", + }, } for _, test := range tests { @@ -4801,6 +4937,21 @@ func TestMutatePod(t *testing.T) { defer func() { _ = k8sClient.Delete(context.Background(), &test.ns) }() + if test.secret != nil { + err = k8sClient.Create(context.Background(), test.secret) + require.NoError(t, err) + defer func() { + _ = k8sClient.Delete(context.Background(), test.secret) + }() + } + if test.configMap != nil { + err = k8sClient.Create(context.Background(), test.configMap) + require.NoError(t, err) + defer func() { + _ = k8sClient.Delete(context.Background(), test.configMap) + }() + } + err = k8sClient.Create(context.Background(), &test.inst) require.NoError(t, err) diff --git a/pkg/instrumentation/python.go b/pkg/instrumentation/python.go index d3cfc51ca4..e39e757052 100644 --- a/pkg/instrumentation/python.go +++
b/pkg/instrumentation/python.go @@ -23,18 +23,25 @@ import ( ) const ( - envPythonPath = "PYTHONPATH" - envOtelTracesExporter = "OTEL_TRACES_EXPORTER" - envOtelMetricsExporter = "OTEL_METRICS_EXPORTER" - envOtelExporterOTLPProtocol = "OTEL_EXPORTER_OTLP_PROTOCOL" - pythonPathPrefix = "/otel-auto-instrumentation-python/opentelemetry/instrumentation/auto_instrumentation" - pythonPathSuffix = "/otel-auto-instrumentation-python" - pythonInstrMountPath = "/otel-auto-instrumentation-python" - pythonVolumeName = volumeName + "-python" - pythonInitContainerName = initContainerName + "-python" + envPythonPath = "PYTHONPATH" + envOtelTracesExporter = "OTEL_TRACES_EXPORTER" + envOtelMetricsExporter = "OTEL_METRICS_EXPORTER" + envOtelLogsExporter = "OTEL_LOGS_EXPORTER" + envOtelExporterOTLPProtocol = "OTEL_EXPORTER_OTLP_PROTOCOL" + glibcLinuxAutoInstrumentationSrc = "/autoinstrumentation/." + muslLinuxAutoInstrumentationSrc = "/autoinstrumentation-musl/." + pythonPathPrefix = "/otel-auto-instrumentation-python/opentelemetry/instrumentation/auto_instrumentation" + pythonPathSuffix = "/otel-auto-instrumentation-python" + pythonInstrMountPath = "/otel-auto-instrumentation-python" + pythonVolumeName = volumeName + "-python" + pythonInitContainerName = initContainerName + "-python" + glibcLinux = "glibc" + muslLinux = "musl" ) -func injectPythonSDK(pythonSpec v1alpha1.Python, pod corev1.Pod, index int) (corev1.Pod, error) { +func injectPythonSDK(pythonSpec v1alpha1.Python, pod corev1.Pod, index int, platform string) (corev1.Pod, error) { + volume := instrVolume(pythonSpec.VolumeClaimTemplate, pythonVolumeName, pythonSpec.VolumeSizeLimit) + // caller checks if there is at least one container. 
container := &pod.Spec.Containers[index] @@ -43,6 +50,16 @@ func injectPythonSDK(pythonSpec v1alpha1.Python, pod corev1.Pod, index int) (cor return pod, err } + autoInstrumentationSrc := "" + switch platform { + case "", glibcLinux: + autoInstrumentationSrc = glibcLinuxAutoInstrumentationSrc + case muslLinux: + autoInstrumentationSrc = muslLinuxAutoInstrumentationSrc + default: + return pod, fmt.Errorf("provided instrumentation.opentelemetry.io/otel-python-platform annotation value '%s' is not supported", platform) + } + // inject Python instrumentation spec env vars. for _, env := range pythonSpec.Env { idx := getIndexOfEnv(container.Env, env.Name) @@ -70,7 +87,7 @@ func injectPythonSDK(pythonSpec v1alpha1.Python, pod corev1.Pod, index int) (cor }) } - // Set OTEL_TRACES_EXPORTER to HTTP exporter if not set by user because it is what our autoinstrumentation supports. + // Set OTEL_TRACES_EXPORTER to otlp exporter if not set by user because it is what our autoinstrumentation supports. idx = getIndexOfEnv(container.Env, envOtelTracesExporter) if idx == -1 { container.Env = append(container.Env, corev1.EnvVar{ @@ -79,7 +96,7 @@ func injectPythonSDK(pythonSpec v1alpha1.Python, pod corev1.Pod, index int) (cor }) } - // Set OTEL_METRICS_EXPORTER to HTTP exporter if not set by user because it is what our autoinstrumentation supports. + // Set OTEL_METRICS_EXPORTER to otlp exporter if not set by user because it is what our autoinstrumentation supports. idx = getIndexOfEnv(container.Env, envOtelMetricsExporter) if idx == -1 { container.Env = append(container.Env, corev1.EnvVar{ @@ -88,28 +105,30 @@ func injectPythonSDK(pythonSpec v1alpha1.Python, pod corev1.Pod, index int) (cor }) } + // Set OTEL_LOGS_EXPORTER to otlp exporter if not set by user because it is what our autoinstrumentation supports. 
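The glibc/musl selection added to `injectPythonSDK` reduces to a three-way switch on the `instrumentation.opentelemetry.io/otel-python-platform` annotation value, with the empty string defaulting to the glibc build. A standalone sketch of that selection (constant values copied from the hunk above):

```go
package main

import "fmt"

const (
	glibcLinuxAutoInstrumentationSrc = "/autoinstrumentation/."
	muslLinuxAutoInstrumentationSrc  = "/autoinstrumentation-musl/."
)

// autoInstrumentationSrc mirrors the switch in injectPythonSDK: an unset
// annotation defaults to the glibc build, "musl" selects the musl build,
// and any other value is rejected with an error.
func autoInstrumentationSrc(platform string) (string, error) {
	switch platform {
	case "", "glibc":
		return glibcLinuxAutoInstrumentationSrc, nil
	case "musl":
		return muslLinuxAutoInstrumentationSrc, nil
	default:
		return "", fmt.Errorf("python platform annotation value '%s' is not supported", platform)
	}
}

func main() {
	src, _ := autoInstrumentationSrc("musl")
	fmt.Println(src) // /autoinstrumentation-musl/.
}
```

The chosen path then becomes the source argument of the init container's `cp -r` command, so an unsupported value fails the injection up front instead of producing an init container that copies from a nonexistent directory.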
+ idx = getIndexOfEnv(container.Env, envOtelLogsExporter) + if idx == -1 { + container.Env = append(container.Env, corev1.EnvVar{ + Name: envOtelLogsExporter, + Value: "otlp", + }) + } + container.VolumeMounts = append(container.VolumeMounts, corev1.VolumeMount{ - Name: pythonVolumeName, + Name: volume.Name, MountPath: pythonInstrMountPath, }) // We just inject Volumes and init containers for the first processed container. if isInitContainerMissing(pod, pythonInitContainerName) { - pod.Spec.Volumes = append(pod.Spec.Volumes, corev1.Volume{ - Name: pythonVolumeName, - VolumeSource: corev1.VolumeSource{ - EmptyDir: &corev1.EmptyDirVolumeSource{ - SizeLimit: volumeSize(pythonSpec.VolumeSizeLimit), - }, - }}) - + pod.Spec.Volumes = append(pod.Spec.Volumes, volume) pod.Spec.InitContainers = append(pod.Spec.InitContainers, corev1.Container{ Name: pythonInitContainerName, Image: pythonSpec.Image, - Command: []string{"cp", "-r", "/autoinstrumentation/.", pythonInstrMountPath}, + Command: []string{"cp", "-r", autoInstrumentationSrc, pythonInstrMountPath}, Resources: pythonSpec.Resources, VolumeMounts: []corev1.VolumeMount{{ - Name: pythonVolumeName, + Name: volume.Name, MountPath: pythonInstrMountPath, }}, }) diff --git a/pkg/instrumentation/python_test.go b/pkg/instrumentation/python_test.go index 2ced01bb07..2347dc480b 100644 --- a/pkg/instrumentation/python_test.go +++ b/pkg/instrumentation/python_test.go @@ -29,6 +29,7 @@ func TestInjectPythonSDK(t *testing.T) { name string v1alpha1.Python pod corev1.Pod + platform string expected corev1.Pod err error }{ @@ -42,6 +43,7 @@ func TestInjectPythonSDK(t *testing.T) { }, }, }, + platform: "glibc", expected: corev1.Pod{ Spec: corev1.PodSpec{ Volumes: []corev1.Volume{ @@ -90,6 +92,10 @@ func TestInjectPythonSDK(t *testing.T) { Name: "OTEL_METRICS_EXPORTER", Value: "otlp", }, + { + Name: "OTEL_LOGS_EXPORTER", + Value: "otlp", + }, }, }, }, @@ -114,6 +120,7 @@ func TestInjectPythonSDK(t *testing.T) { }, }, }, + platform: "glibc", 
expected: corev1.Pod{ Spec: corev1.PodSpec{ Volumes: []corev1.Volume{ @@ -163,6 +170,10 @@ func TestInjectPythonSDK(t *testing.T) { Name: "OTEL_METRICS_EXPORTER", Value: "otlp", }, + { + Name: "OTEL_LOGS_EXPORTER", + Value: "otlp", + }, }, }, }, @@ -187,6 +198,7 @@ func TestInjectPythonSDK(t *testing.T) { }, }, }, + platform: "glibc", expected: corev1.Pod{ Spec: corev1.PodSpec{ Volumes: []corev1.Volume{ @@ -235,6 +247,10 @@ func TestInjectPythonSDK(t *testing.T) { Name: "OTEL_METRICS_EXPORTER", Value: "otlp", }, + { + Name: "OTEL_LOGS_EXPORTER", + Value: "otlp", + }, }, }, }, @@ -259,6 +275,7 @@ func TestInjectPythonSDK(t *testing.T) { }, }, }, + platform: "glibc", expected: corev1.Pod{ Spec: corev1.PodSpec{ Volumes: []corev1.Volume{ @@ -307,6 +324,86 @@ func TestInjectPythonSDK(t *testing.T) { Name: "OTEL_TRACES_EXPORTER", Value: "otlp", }, + { + Name: "OTEL_LOGS_EXPORTER", + Value: "otlp", + }, + }, + }, + }, + }, + }, + err: nil, + }, + { + name: "OTEL_LOGS_EXPORTER defined", + Python: v1alpha1.Python{Image: "foo/bar:1"}, + pod: corev1.Pod{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Env: []corev1.EnvVar{ + { + Name: "OTEL_LOGS_EXPORTER", + Value: "somebackend", + }, + }, + }, + }, + }, + }, + expected: corev1.Pod{ + Spec: corev1.PodSpec{ + Volumes: []corev1.Volume{ + { + Name: "opentelemetry-auto-instrumentation-python", + VolumeSource: corev1.VolumeSource{ + EmptyDir: &corev1.EmptyDirVolumeSource{ + SizeLimit: &defaultVolumeLimitSize, + }, + }, + }, + }, + InitContainers: []corev1.Container{ + { + Name: "opentelemetry-auto-instrumentation-python", + Image: "foo/bar:1", + Command: []string{"cp", "-r", "/autoinstrumentation/.", "/otel-auto-instrumentation-python"}, + VolumeMounts: []corev1.VolumeMount{{ + Name: "opentelemetry-auto-instrumentation-python", + MountPath: "/otel-auto-instrumentation-python", + }}, + }, + }, + Containers: []corev1.Container{ + { + VolumeMounts: []corev1.VolumeMount{ + { + Name: 
"opentelemetry-auto-instrumentation-python", + MountPath: "/otel-auto-instrumentation-python", + }, + }, + Env: []corev1.EnvVar{ + { + Name: "OTEL_LOGS_EXPORTER", + Value: "somebackend", + }, + { + Name: "PYTHONPATH", + Value: fmt.Sprintf("%s:%s", "/otel-auto-instrumentation-python/opentelemetry/instrumentation/auto_instrumentation", "/otel-auto-instrumentation-python"), + }, + { + Name: "OTEL_EXPORTER_OTLP_PROTOCOL", + Value: "http/protobuf", + }, + { + Name: "OTEL_TRACES_EXPORTER", + Value: "otlp", + }, + { + Name: "OTEL_METRICS_EXPORTER", + Value: "otlp", + }, }, }, }, @@ -331,6 +428,7 @@ func TestInjectPythonSDK(t *testing.T) { }, }, }, + platform: "glibc", expected: corev1.Pod{ Spec: corev1.PodSpec{ Volumes: []corev1.Volume{ @@ -379,6 +477,10 @@ func TestInjectPythonSDK(t *testing.T) { Name: "OTEL_METRICS_EXPORTER", Value: "otlp", }, + { + Name: "OTEL_LOGS_EXPORTER", + Value: "otlp", + }, }, }, }, @@ -403,6 +505,7 @@ func TestInjectPythonSDK(t *testing.T) { }, }, }, + platform: "glibc", expected: corev1.Pod{ Spec: corev1.PodSpec{ Containers: []corev1.Container{ @@ -419,11 +522,171 @@ func TestInjectPythonSDK(t *testing.T) { }, err: fmt.Errorf("the container defines env var value via ValueFrom, envVar: %s", envPythonPath), }, + { + name: "musl platform defined", + Python: v1alpha1.Python{Image: "foo/bar:1"}, + pod: corev1.Pod{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {}, + }, + }, + }, + platform: "musl", + expected: corev1.Pod{ + Spec: corev1.PodSpec{ + Volumes: []corev1.Volume{ + { + Name: pythonVolumeName, + VolumeSource: corev1.VolumeSource{ + EmptyDir: &corev1.EmptyDirVolumeSource{ + SizeLimit: &defaultVolumeLimitSize, + }, + }, + }, + }, + InitContainers: []corev1.Container{ + { + Name: "opentelemetry-auto-instrumentation-python", + Image: "foo/bar:1", + Command: []string{"cp", "-r", "/autoinstrumentation-musl/.", "/otel-auto-instrumentation-python"}, + VolumeMounts: []corev1.VolumeMount{{ + Name: 
"opentelemetry-auto-instrumentation-python", + MountPath: "/otel-auto-instrumentation-python", + }}, + }, + }, + Containers: []corev1.Container{ + { + VolumeMounts: []corev1.VolumeMount{ + { + Name: "opentelemetry-auto-instrumentation-python", + MountPath: "/otel-auto-instrumentation-python", + }, + }, + Env: []corev1.EnvVar{ + { + Name: "PYTHONPATH", + Value: fmt.Sprintf("%s:%s", "/otel-auto-instrumentation-python/opentelemetry/instrumentation/auto_instrumentation", "/otel-auto-instrumentation-python"), + }, + { + Name: "OTEL_EXPORTER_OTLP_PROTOCOL", + Value: "http/protobuf", + }, + { + Name: "OTEL_TRACES_EXPORTER", + Value: "otlp", + }, + { + Name: "OTEL_METRICS_EXPORTER", + Value: "otlp", + }, + { + Name: "OTEL_LOGS_EXPORTER", + Value: "otlp", + }, + }, + }, + }, + }, + }, + err: nil, + }, + { + name: "platform not defined", + Python: v1alpha1.Python{Image: "foo/bar:1"}, + pod: corev1.Pod{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {}, + }, + }, + }, + platform: "", + expected: corev1.Pod{ + Spec: corev1.PodSpec{ + Volumes: []corev1.Volume{ + { + Name: pythonVolumeName, + VolumeSource: corev1.VolumeSource{ + EmptyDir: &corev1.EmptyDirVolumeSource{ + SizeLimit: &defaultVolumeLimitSize, + }, + }, + }, + }, + InitContainers: []corev1.Container{ + { + Name: "opentelemetry-auto-instrumentation-python", + Image: "foo/bar:1", + Command: []string{"cp", "-r", "/autoinstrumentation/.", "/otel-auto-instrumentation-python"}, + VolumeMounts: []corev1.VolumeMount{{ + Name: "opentelemetry-auto-instrumentation-python", + MountPath: "/otel-auto-instrumentation-python", + }}, + }, + }, + Containers: []corev1.Container{ + { + VolumeMounts: []corev1.VolumeMount{ + { + Name: "opentelemetry-auto-instrumentation-python", + MountPath: "/otel-auto-instrumentation-python", + }, + }, + Env: []corev1.EnvVar{ + { + Name: "PYTHONPATH", + Value: fmt.Sprintf("%s:%s", "/otel-auto-instrumentation-python/opentelemetry/instrumentation/auto_instrumentation", 
"/otel-auto-instrumentation-python"), + }, + { + Name: "OTEL_EXPORTER_OTLP_PROTOCOL", + Value: "http/protobuf", + }, + { + Name: "OTEL_TRACES_EXPORTER", + Value: "otlp", + }, + { + Name: "OTEL_METRICS_EXPORTER", + Value: "otlp", + }, + { + Name: "OTEL_LOGS_EXPORTER", + Value: "otlp", + }, + }, + }, + }, + }, + }, + err: nil, + }, + { + name: "platform not supported", + Python: v1alpha1.Python{Image: "foo/bar:1"}, + pod: corev1.Pod{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {}, + }, + }, + }, + platform: "not-supported", + expected: corev1.Pod{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {}, + }, + }, + }, + err: fmt.Errorf("provided instrumentation.opentelemetry.io/otel-python-platform annotation value 'not-supported' is not supported"), + }, } for _, test := range tests { t.Run(test.name, func(t *testing.T) { - pod, err := injectPythonSDK(test.Python, test.pod, 0) + pod, err := injectPythonSDK(test.Python, test.pod, 0, test.platform) assert.Equal(t, test.expected, pod) assert.Equal(t, test.err, err) }) diff --git a/pkg/instrumentation/sdk.go b/pkg/instrumentation/sdk.go index a9f7d9bbfd..87141a1cf8 100644 --- a/pkg/instrumentation/sdk.go +++ b/pkg/instrumentation/sdk.go @@ -110,7 +110,7 @@ func (i *sdkInjector) inject(ctx context.Context, insts languageInstrumentations for _, container := range insts.Python.Containers { index := getContainerIndex(container, pod) - pod, err = injectPythonSDK(otelinst.Spec.Python, pod, index) + pod, err = injectPythonSDK(otelinst.Spec.Python, pod, index, insts.Python.AdditionalAnnotations[annotationPythonPlatform]) if err != nil { i.logger.Info("Skipping Python SDK injection", "reason", err.Error(), "container", pod.Spec.Containers[index].Name) } else { @@ -304,15 +304,7 @@ func (i *sdkInjector) injectCommonSDKConfig(ctx context.Context, otelinst v1alph Value: chooseServiceName(pod, useLabelsForResourceAttributes, resourceMap, appIndex), }) } - if otelinst.Spec.Exporter.Endpoint != "" { - idx = 
getIndexOfEnv(container.Env, constants.EnvOTELExporterOTLPEndpoint) - if idx == -1 { - container.Env = append(container.Env, corev1.EnvVar{ - Name: constants.EnvOTELExporterOTLPEndpoint, - Value: otelinst.Spec.Endpoint, - }) - } - } + configureExporter(otelinst.Spec.Exporter, &pod, container) // Always retrieve the pod name from the Downward API. Ensure that the OTEL_RESOURCE_ATTRIBUTES_POD_NAME env exists. container.Env = append(container.Env, corev1.EnvVar{ diff --git a/pkg/instrumentation/sdk_test.go b/pkg/instrumentation/sdk_test.go index c3abedac04..04f9826807 100644 --- a/pkg/instrumentation/sdk_test.go +++ b/pkg/instrumentation/sdk_test.go @@ -1261,6 +1261,10 @@ func TestInjectPython(t *testing.T) { Name: "OTEL_METRICS_EXPORTER", Value: "otlp", }, + { + Name: "OTEL_LOGS_EXPORTER", + Value: "otlp", + }, { Name: "OTEL_SERVICE_NAME", Value: "app", @@ -1836,7 +1840,7 @@ func TestInjectApacheHttpd(t *testing.T) { Image: "img:1", Command: []string{"/bin/sh", "-c"}, Args: []string{ - "cp -r /opt/opentelemetry/* /opt/opentelemetry-webserver/agent && export agentLogDir=$(echo \"/opt/opentelemetry-webserver/agent/logs\" | sed 's,/,\\\\/,g') && cat /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml.template | sed 's/__agent_log_dir__/'${agentLogDir}'/g' > /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml &&echo \"$OTEL_APACHE_AGENT_CONF\" > /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && sed -i 's/<>/'${APACHE_SERVICE_INSTANCE_ID}'/g' /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && echo 'Include /usr/local/apache2/conf/opentemetry_agent.conf' >> /opt/opentelemetry-webserver/source-conf/httpd.conf"}, + "cp -r /opt/opentelemetry/* /opt/opentelemetry-webserver/agent && export agentLogDir=$(echo \"/opt/opentelemetry-webserver/agent/logs\" | sed 's,/,\\\\/,g') && cat /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml.template | sed 's/__agent_log_dir__/'${agentLogDir}'/g' > 
/opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml &&echo \"$OTEL_APACHE_AGENT_CONF\" > /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && sed -i 's/<>/'${APACHE_SERVICE_INSTANCE_ID}'/g' /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && echo -e '\nInclude /usr/local/apache2/conf/opentemetry_agent.conf' >> /opt/opentelemetry-webserver/source-conf/httpd.conf"}, Env: []corev1.EnvVar{ { Name: apacheAttributesEnvVar, diff --git a/pkg/sidecar/pod.go b/pkg/sidecar/pod.go index d7db13484c..d7a99918df 100644 --- a/pkg/sidecar/pod.go +++ b/pkg/sidecar/pod.go @@ -17,6 +17,7 @@ package sidecar import ( "fmt" + "slices" "github.com/go-logr/logr" corev1 "k8s.io/api/core/v1" @@ -25,6 +26,7 @@ import ( "github.com/open-telemetry/opentelemetry-operator/internal/config" "github.com/open-telemetry/opentelemetry-operator/internal/manifests/collector" "github.com/open-telemetry/opentelemetry-operator/internal/naming" + "github.com/open-telemetry/opentelemetry-operator/pkg/featuregate" ) const ( @@ -47,7 +49,17 @@ func add(cfg config.Config, logger logr.Logger, otelcol v1beta1.OpenTelemetryCol container.Env = append(container.Env, attributes...) } pod.Spec.InitContainers = append(pod.Spec.InitContainers, otelcol.Spec.InitContainers...) - pod.Spec.Containers = append(pod.Spec.Containers, container) + + if featuregate.EnableNativeSidecarContainers.IsEnabled() { + policy := corev1.ContainerRestartPolicyAlways + container.RestartPolicy = &policy + // NOTE: Use ReadinessProbe as startup probe. + // See https://github.com/open-telemetry/opentelemetry-operator/pull/2801#discussion_r1547571121 + container.StartupProbe = container.ReadinessProbe + pod.Spec.InitContainers = append(pod.Spec.InitContainers, container) + } else { + pod.Spec.Containers = append(pod.Spec.Containers, container) + } pod.Spec.Volumes = append(pod.Spec.Volumes, otelcol.Spec.Volumes...) 
if pod.Labels == nil { @@ -58,26 +70,34 @@ func add(cfg config.Config, logger logr.Logger, otelcol v1beta1.OpenTelemetryCol return pod, nil } +func isOtelColContainer(c corev1.Container) bool { return c.Name == naming.Container() } + // remove the sidecar container from the given pod. func remove(pod corev1.Pod) corev1.Pod { if !existsIn(pod) { return pod } - var containers []corev1.Container - for _, container := range pod.Spec.Containers { - if container.Name != naming.Container() { - containers = append(containers, container) - } + pod.Spec.Containers = slices.DeleteFunc(pod.Spec.Containers, isOtelColContainer) + + if featuregate.EnableNativeSidecarContainers.IsEnabled() { + // NOTE: we also remove init containers (native sidecars) since k8s 1.28. + // This should have no side effects. + pod.Spec.InitContainers = slices.DeleteFunc(pod.Spec.InitContainers, isOtelColContainer) } - pod.Spec.Containers = containers return pod } // existsIn checks whether a sidecar container exists in the given pod. func existsIn(pod corev1.Pod) bool { - for _, container := range pod.Spec.Containers { - if container.Name == naming.Container() { + if slices.ContainsFunc(pod.Spec.Containers, isOtelColContainer) { + return true + } + + if featuregate.EnableNativeSidecarContainers.IsEnabled() { + // NOTE: we also check init containers (native sidecars) since k8s 1.28. + // This should have no side effects. 
+ if slices.ContainsFunc(pod.Spec.InitContainers, isOtelColContainer) { return true } } diff --git a/pkg/sidecar/pod_test.go b/pkg/sidecar/pod_test.go index c941961181..58c0de9841 100644 --- a/pkg/sidecar/pod_test.go +++ b/pkg/sidecar/pod_test.go @@ -19,6 +19,7 @@ import ( "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" + colfeaturegate "go.opentelemetry.io/collector/featuregate" corev1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" logf "sigs.k8s.io/controller-runtime/pkg/log" @@ -26,10 +27,99 @@ import ( "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1" "github.com/open-telemetry/opentelemetry-operator/internal/config" "github.com/open-telemetry/opentelemetry-operator/internal/naming" + "github.com/open-telemetry/opentelemetry-operator/pkg/featuregate" ) var logger = logf.Log.WithName("unit-tests") +func enableSidecarFeatureGate(t *testing.T) { + originalVal := featuregate.EnableNativeSidecarContainers.IsEnabled() + t.Logf("original is: %+v", originalVal) + require.NoError(t, colfeaturegate.GlobalRegistry().Set(featuregate.EnableNativeSidecarContainers.ID(), true)) + t.Cleanup(func() { + require.NoError(t, colfeaturegate.GlobalRegistry().Set(featuregate.EnableNativeSidecarContainers.ID(), originalVal)) + }) +} + +func TestAddNativeSidecar(t *testing.T) { + enableSidecarFeatureGate(t) + // prepare + pod := corev1.Pod{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: "my-app"}, + }, + InitContainers: []corev1.Container{ + { + Name: "my-init", + }, + }, + // cross-test: the pod has a volume already, make sure we don't remove it + Volumes: []corev1.Volume{{}}, + }, + } + + otelcol := v1beta1.OpenTelemetryCollector{ + ObjectMeta: metav1.ObjectMeta{ + Name: "otelcol-native-sidecar", + Namespace: "some-app", + }, + Spec: v1beta1.OpenTelemetryCollectorSpec{ + Mode: v1beta1.ModeSidecar, + OpenTelemetryCommonFields: v1beta1.OpenTelemetryCommonFields{ + InitContainers: []corev1.Container{ + { + 
Name: "test", + }, + }, + }, + }, + } + + otelcolYaml, err := otelcol.Spec.Config.Yaml() + require.NoError(t, err) + cfg := config.New(config.WithCollectorImage("some-default-image")) + + // test + changed, err := add(cfg, logger, otelcol, pod, nil) + + // verify + assert.NoError(t, err) + require.Len(t, changed.Spec.Containers, 1) + require.Len(t, changed.Spec.InitContainers, 3) + require.Len(t, changed.Spec.Volumes, 1) + assert.Equal(t, "some-app.otelcol-native-sidecar", + changed.Labels["sidecar.opentelemetry.io/injected"]) + expectedPolicy := corev1.ContainerRestartPolicyAlways + assert.Equal(t, corev1.Container{ + Name: "otc-container", + Image: "some-default-image", + Args: []string{"--config=env:OTEL_CONFIG"}, + RestartPolicy: &expectedPolicy, + Env: []corev1.EnvVar{ + { + Name: "POD_NAME", + ValueFrom: &corev1.EnvVarSource{ + FieldRef: &corev1.ObjectFieldSelector{ + FieldPath: "metadata.name", + }, + }, + }, + { + Name: "OTEL_CONFIG", + Value: string(otelcolYaml), + }, + }, + Ports: []corev1.ContainerPort{ + { + Name: "metrics", + ContainerPort: 8888, + Protocol: corev1.ProtocolTCP, + }, + }, + }, changed.Spec.InitContainers[2]) +} + func TestAddSidecarWhenNoSidecarExists(t *testing.T) { // prepare pod := corev1.Pod{ @@ -146,6 +236,11 @@ func TestRemoveSidecar(t *testing.T) { {Name: naming.Container()}, {Name: naming.Container()}, // two sidecars! should remove both }, + InitContainers: []corev1.Container{ + {Name: "something"}, + {Name: naming.Container()}, // NOTE: native sidecar since k8s 1.28. + {Name: naming.Container()}, // two sidecars! 
should remove both + }, }, } @@ -174,6 +269,8 @@ func TestRemoveNonExistingSidecar(t *testing.T) { } func TestExistsIn(t *testing.T) { + enableSidecarFeatureGate(t) + for _, tt := range []struct { desc string pod corev1.Pod @@ -190,6 +287,19 @@ func TestExistsIn(t *testing.T) { }, true}, + {"does-have-native-sidecar", + corev1.Pod{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: "my-app"}, + }, + InitContainers: []corev1.Container{ + {Name: naming.Container()}, + }, + }, + }, + true}, + {"does-not-have-sidecar", corev1.Pod{ Spec: corev1.PodSpec{ diff --git a/tests/e2e-automatic-rbac/extra-permissions-operator/clusterresourcequotas.yaml b/tests/e2e-automatic-rbac/extra-permissions-operator/clusterresourcequotas.yaml new file mode 100644 index 0000000000..89cd1ed2f4 --- /dev/null +++ b/tests/e2e-automatic-rbac/extra-permissions-operator/clusterresourcequotas.yaml @@ -0,0 +1,11 @@ +- op: add + path: /rules/- + value: + apiGroups: + - quota.openshift.io + resources: + - clusterresourcequotas + verbs: + - get + - list + - watch \ No newline at end of file diff --git a/tests/e2e-automatic-rbac/extra-permissions-operator/cronjobs.yaml b/tests/e2e-automatic-rbac/extra-permissions-operator/cronjobs.yaml new file mode 100644 index 0000000000..f1f0638831 --- /dev/null +++ b/tests/e2e-automatic-rbac/extra-permissions-operator/cronjobs.yaml @@ -0,0 +1,12 @@ +--- +- op: add + path: /rules/- + value: + apiGroups: + - batch + resources: + - cronjobs + verbs: + - get + - list + - watch diff --git a/tests/e2e-automatic-rbac/extra-permissions-operator/daemonsets.yaml b/tests/e2e-automatic-rbac/extra-permissions-operator/daemonsets.yaml new file mode 100644 index 0000000000..545e68e502 --- /dev/null +++ b/tests/e2e-automatic-rbac/extra-permissions-operator/daemonsets.yaml @@ -0,0 +1,11 @@ +- op: add + path: /rules/- + value: + apiGroups: + - extensions + resources: + - daemonsets + verbs: + - get + - list + - watch diff --git 
a/tests/e2e-automatic-rbac/extra-permissions-operator/events.yaml b/tests/e2e-automatic-rbac/extra-permissions-operator/events.yaml new file mode 100644 index 0000000000..ee15613b79 --- /dev/null +++ b/tests/e2e-automatic-rbac/extra-permissions-operator/events.yaml @@ -0,0 +1,11 @@ +- op: add + path: /rules/- + value: + apiGroups: + - "" + resources: + - events + verbs: + - get + - list + - watch diff --git a/tests/e2e-automatic-rbac/extra-permissions-operator/extensions.yaml b/tests/e2e-automatic-rbac/extra-permissions-operator/extensions.yaml new file mode 100644 index 0000000000..3b3273b448 --- /dev/null +++ b/tests/e2e-automatic-rbac/extra-permissions-operator/extensions.yaml @@ -0,0 +1,13 @@ +--- +- op: add + path: /rules/- + value: + apiGroups: + - extensions + resources: + - deployments + - replicasets + verbs: + - get + - list + - watch diff --git a/tests/e2e-automatic-rbac/extra-permissions-operator/namespaces-status.yaml b/tests/e2e-automatic-rbac/extra-permissions-operator/namespaces-status.yaml new file mode 100644 index 0000000000..0575128574 --- /dev/null +++ b/tests/e2e-automatic-rbac/extra-permissions-operator/namespaces-status.yaml @@ -0,0 +1,11 @@ +- op: add + path: /rules/- + value: + apiGroups: + - "" + resources: + - namespaces/status + verbs: + - get + - list + - watch diff --git a/tests/e2e-automatic-rbac/extra-permissions-operator/nodes-proxy.yaml b/tests/e2e-automatic-rbac/extra-permissions-operator/nodes-proxy.yaml new file mode 100644 index 0000000000..81919cd9b1 --- /dev/null +++ b/tests/e2e-automatic-rbac/extra-permissions-operator/nodes-proxy.yaml @@ -0,0 +1,11 @@ +--- +- op: add + path: /rules/- + value: + apiGroups: + - "" + resources: + - nodes/stats + - nodes/proxy + verbs: + - get diff --git a/tests/e2e-automatic-rbac/extra-permissions-operator/nodes-spec.yaml b/tests/e2e-automatic-rbac/extra-permissions-operator/nodes-spec.yaml new file mode 100644 index 0000000000..d8a9242aea --- /dev/null +++ 
b/tests/e2e-automatic-rbac/extra-permissions-operator/nodes-spec.yaml @@ -0,0 +1,12 @@ +--- +- op: add + path: /rules/- + value: + apiGroups: + - "" + resources: + - nodes/spec + verbs: + - get + - list + - watch diff --git a/tests/e2e-automatic-rbac/extra-permissions-operator/pod-status.yaml b/tests/e2e-automatic-rbac/extra-permissions-operator/pod-status.yaml new file mode 100644 index 0000000000..c12a947b47 --- /dev/null +++ b/tests/e2e-automatic-rbac/extra-permissions-operator/pod-status.yaml @@ -0,0 +1,12 @@ +--- +- op: add + path: /rules/- + value: + apiGroups: + - "" + resources: + - pods/status + verbs: + - get + - list + - watch diff --git a/tests/e2e-automatic-rbac/extra-permissions-operator/replicationcontrollers.yaml b/tests/e2e-automatic-rbac/extra-permissions-operator/replicationcontrollers.yaml new file mode 100644 index 0000000000..793ebd289b --- /dev/null +++ b/tests/e2e-automatic-rbac/extra-permissions-operator/replicationcontrollers.yaml @@ -0,0 +1,12 @@ +- op: add + path: /rules/- + value: + apiGroups: + - "" + resources: + - replicationcontrollers + - replicationcontrollers/status + verbs: + - get + - list + - watch diff --git a/tests/e2e-automatic-rbac/extra-permissions-operator/resourcequotas.yaml b/tests/e2e-automatic-rbac/extra-permissions-operator/resourcequotas.yaml new file mode 100644 index 0000000000..f529640c25 --- /dev/null +++ b/tests/e2e-automatic-rbac/extra-permissions-operator/resourcequotas.yaml @@ -0,0 +1,11 @@ +- op: add + path: /rules/- + value: + apiGroups: + - "" + resources: + - resourcequotas + verbs: + - get + - list + - watch diff --git a/tests/e2e-automatic-rbac/receiver-k8scluster/00-install.yaml b/tests/e2e-automatic-rbac/receiver-k8scluster/00-install.yaml new file mode 100644 index 0000000000..36737528f0 --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-k8scluster/00-install.yaml @@ -0,0 +1,4 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: chainsaw-k8s-cluster diff --git 
a/tests/e2e-automatic-rbac/receiver-k8scluster/01-assert.yaml b/tests/e2e-automatic-rbac/receiver-k8scluster/01-assert.yaml new file mode 100644 index 0000000000..eefc9620c0 --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-k8scluster/01-assert.yaml @@ -0,0 +1,80 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: simplest-chainsaw-k8s-cluster-cluster-role +rules: +- apiGroups: + - "" + resources: + - events + - namespaces + - namespaces/status + - nodes + - nodes/spec + - pods + - pods/status + - replicationcontrollers + - replicationcontrollers/status + - resourcequotas + - services + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - daemonsets + - deployments + - replicasets + - statefulsets + verbs: + - get + - list + - watch +- apiGroups: + - extensions + resources: + - daemonsets + - deployments + - replicasets + verbs: + - get + - list + - watch +- apiGroups: + - batch + resources: + - jobs + - cronjobs + verbs: + - get + - list + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - get + - list + - watch +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/instance: chainsaw-k8s-cluster.simplest + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: simplest-chainsaw-k8s-cluster-collector + app.kubernetes.io/part-of: opentelemetry + name: simplest-chainsaw-k8s-cluster-collector +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: simplest-chainsaw-k8s-cluster-cluster-role +subjects: +- kind: ServiceAccount + name: simplest-collector + namespace: chainsaw-k8s-cluster diff --git a/tests/e2e-automatic-rbac/receiver-k8scluster/01-install.yaml b/tests/e2e-automatic-rbac/receiver-k8scluster/01-install.yaml new file mode 100644 index 0000000000..2cdc575046 --- /dev/null +++ 
b/tests/e2e-automatic-rbac/receiver-k8scluster/01-install.yaml @@ -0,0 +1,18 @@ +apiVersion: opentelemetry.io/v1alpha1 +kind: OpenTelemetryCollector +metadata: + name: simplest + namespace: chainsaw-k8s-cluster +spec: + config: | + receivers: + k8s_cluster: + processors: + exporters: + debug: + service: + pipelines: + traces: + receivers: [k8s_cluster] + processors: [] + exporters: [debug] diff --git a/tests/e2e-automatic-rbac/receiver-k8scluster/02-assert.yaml b/tests/e2e-automatic-rbac/receiver-k8scluster/02-assert.yaml new file mode 100644 index 0000000000..e95ce23092 --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-k8scluster/02-assert.yaml @@ -0,0 +1,88 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: simplest-chainsaw-k8s-cluster-cluster-role +rules: +- apiGroups: + - "" + resources: + - events + - namespaces + - namespaces/status + - nodes + - nodes/spec + - pods + - pods/status + - replicationcontrollers + - replicationcontrollers/status + - resourcequotas + - services + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - daemonsets + - deployments + - replicasets + - statefulsets + verbs: + - get + - list + - watch +- apiGroups: + - extensions + resources: + - daemonsets + - deployments + - replicasets + verbs: + - get + - list + - watch +- apiGroups: + - batch + resources: + - jobs + - cronjobs + verbs: + - get + - list + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - get + - list + - watch +- apiGroups: + - quota.openshift.io + resources: + - clusterresourcequotas + verbs: + - get + - list + - watch +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/instance: chainsaw-k8s-cluster.simplest + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: simplest-chainsaw-k8s-cluster-collector + 
app.kubernetes.io/part-of: opentelemetry + name: simplest-chainsaw-k8s-cluster-collector +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: simplest-chainsaw-k8s-cluster-cluster-role +subjects: +- kind: ServiceAccount + name: simplest-collector + namespace: chainsaw-k8s-cluster diff --git a/tests/e2e-automatic-rbac/receiver-k8scluster/02-install.yaml b/tests/e2e-automatic-rbac/receiver-k8scluster/02-install.yaml new file mode 100644 index 0000000000..984cef98fe --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-k8scluster/02-install.yaml @@ -0,0 +1,19 @@ +apiVersion: opentelemetry.io/v1alpha1 +kind: OpenTelemetryCollector +metadata: + name: simplest + namespace: chainsaw-k8s-cluster +spec: + config: | + receivers: + k8s_cluster: + distribution: openshift + processors: + exporters: + debug: + service: + pipelines: + traces: + receivers: [k8s_cluster] + processors: [] + exporters: [debug] diff --git a/tests/e2e-automatic-rbac/receiver-k8sevents/00-install.yaml b/tests/e2e-automatic-rbac/receiver-k8sevents/00-install.yaml new file mode 100644 index 0000000000..fb47fe3810 --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-k8sevents/00-install.yaml @@ -0,0 +1,4 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: chainsaw-k8s-events diff --git a/tests/e2e-automatic-rbac/receiver-k8sevents/01-assert.yaml b/tests/e2e-automatic-rbac/receiver-k8sevents/01-assert.yaml new file mode 100644 index 0000000000..59440d2ba7 --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-k8sevents/01-assert.yaml @@ -0,0 +1,80 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: simplest-chainsaw-k8s-events-cluster-role +rules: +- apiGroups: + - "" + resources: + - events + - namespaces + - namespaces/status + - nodes + - nodes/spec + - pods + - pods/status + - replicationcontrollers + - replicationcontrollers/status + - resourcequotas + - services + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - 
daemonsets + - deployments + - replicasets + - statefulsets + verbs: + - get + - list + - watch +- apiGroups: + - extensions + resources: + - daemonsets + - deployments + - replicasets + verbs: + - get + - list + - watch +- apiGroups: + - batch + resources: + - jobs + - cronjobs + verbs: + - get + - list + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - get + - list + - watch +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/instance: chainsaw-k8s-events.simplest + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: simplest-chainsaw-k8s-events-collector + app.kubernetes.io/part-of: opentelemetry + name: simplest-chainsaw-k8s-events-collector +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: simplest-chainsaw-k8s-events-cluster-role +subjects: +- kind: ServiceAccount + name: simplest-collector + namespace: chainsaw-k8s-events diff --git a/tests/e2e-automatic-rbac/receiver-k8sevents/01-install.yaml b/tests/e2e-automatic-rbac/receiver-k8sevents/01-install.yaml new file mode 100644 index 0000000000..4de742cc52 --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-k8sevents/01-install.yaml @@ -0,0 +1,18 @@ +apiVersion: opentelemetry.io/v1alpha1 +kind: OpenTelemetryCollector +metadata: + name: simplest + namespace: chainsaw-k8s-events +spec: + config: | + receivers: + k8s_events: + processors: + exporters: + debug: + service: + pipelines: + traces: + receivers: [k8s_events] + processors: [] + exporters: [debug] diff --git a/tests/e2e-automatic-rbac/receiver-k8sevents/chainsaw-test.yaml b/tests/e2e-automatic-rbac/receiver-k8sevents/chainsaw-test.yaml new file mode 100644 index 0000000000..3dc42480ea --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-k8sevents/chainsaw-test.yaml @@ -0,0 +1,18 @@ +# yaml-language-server: 
$schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json +apiVersion: chainsaw.kyverno.io/v1alpha1 +kind: Test +metadata: + creationTimestamp: null + name: receiver-k8sevents +spec: + steps: + - name: create-namespace + try: + - apply: + file: 00-install.yaml + - name: default-config + try: + - apply: + file: 01-install.yaml + - assert: + file: 01-assert.yaml diff --git a/tests/e2e-automatic-rbac/receiver-k8sobjects/00-install.yaml b/tests/e2e-automatic-rbac/receiver-k8sobjects/00-install.yaml new file mode 100644 index 0000000000..76e8a59449 --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-k8sobjects/00-install.yaml @@ -0,0 +1,4 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: chainsaw-k8sobjects diff --git a/tests/e2e-automatic-rbac/receiver-k8sobjects/01-assert.yaml b/tests/e2e-automatic-rbac/receiver-k8sobjects/01-assert.yaml new file mode 100644 index 0000000000..5542960bbb --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-k8sobjects/01-assert.yaml @@ -0,0 +1,31 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: simplest-chainsaw-k8sobjects-cluster-role +rules: +- apiGroups: + - "" + resources: + - pods + verbs: + - list + - get +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/instance: chainsaw-k8sobjects.simplest + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: simplest-chainsaw-k8sobjects-collector + app.kubernetes.io/part-of: opentelemetry + name: simplest-chainsaw-k8sobjects-collector +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: simplest-chainsaw-k8sobjects-cluster-role +subjects: +- kind: ServiceAccount + name: simplest-collector + namespace: chainsaw-k8sobjects diff --git a/tests/e2e-automatic-rbac/receiver-k8sobjects/01-install.yaml 
b/tests/e2e-automatic-rbac/receiver-k8sobjects/01-install.yaml new file mode 100644 index 0000000000..fde02268ff --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-k8sobjects/01-install.yaml @@ -0,0 +1,22 @@ +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: simplest + namespace: chainsaw-k8sobjects +spec: + config: + receivers: + k8sobjects: + auth_type: serviceAccount + objects: + - name: pods + mode: pull + processors: + exporters: + debug: + service: + pipelines: + traces: + receivers: [k8sobjects] + processors: [] + exporters: [debug] diff --git a/tests/e2e-automatic-rbac/receiver-k8sobjects/chainsaw-test.yaml b/tests/e2e-automatic-rbac/receiver-k8sobjects/chainsaw-test.yaml new file mode 100644 index 0000000000..0cc38d9945 --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-k8sobjects/chainsaw-test.yaml @@ -0,0 +1,18 @@ +# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json +apiVersion: chainsaw.kyverno.io/v1alpha1 +kind: Test +metadata: + creationTimestamp: null + name: receiver-k8sobjects +spec: + steps: + - name: create-namespace + try: + - apply: + file: 00-install.yaml + - name: pod-pull-config + try: + - apply: + file: 01-install.yaml + - assert: + file: 01-assert.yaml diff --git a/tests/e2e-automatic-rbac/receiver-kubeletstats/00-install.yaml b/tests/e2e-automatic-rbac/receiver-kubeletstats/00-install.yaml new file mode 100644 index 0000000000..919491411b --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-kubeletstats/00-install.yaml @@ -0,0 +1,4 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: chainsaw-kubeletstats \ No newline at end of file diff --git a/tests/e2e-automatic-rbac/receiver-kubeletstats/01-assert.yaml b/tests/e2e-automatic-rbac/receiver-kubeletstats/01-assert.yaml new file mode 100644 index 0000000000..64e07dbd6d --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-kubeletstats/01-assert.yaml @@ -0,0 +1,48 @@ 
+apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: simplest-chainsaw-kubeletstats-cluster-role +rules: +- apiGroups: [""] + resources: ["nodes/stats"] + verbs: ["get"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/instance: chainsaw-kubeletstats.simplest + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: simplest-chainsaw-kubeletstats-collector + app.kubernetes.io/part-of: opentelemetry + name: simplest-chainsaw-kubeletstats-collector +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: simplest-chainsaw-kubeletstats-cluster-role +subjects: +- kind: ServiceAccount + name: simplest-collector + namespace: chainsaw-kubeletstats +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/instance: chainsaw-kubeletstats.simplest + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: simplest-collector + app.kubernetes.io/part-of: opentelemetry + app.kubernetes.io/version: latest + namespace: chainsaw-kubeletstats +spec: + containers: + - name: otc-container + env: + - name: POD_NAME + - name: K8S_NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName diff --git a/tests/e2e-automatic-rbac/receiver-kubeletstats/01-install.yaml b/tests/e2e-automatic-rbac/receiver-kubeletstats/01-install.yaml new file mode 100644 index 0000000000..027d213129 --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-kubeletstats/01-install.yaml @@ -0,0 +1,19 @@ +apiVersion: opentelemetry.io/v1alpha1 +kind: OpenTelemetryCollector +metadata: + name: simplest + namespace: chainsaw-kubeletstats +spec: + config: | + receivers: + kubeletstats: + auth_type: "" + processors: + exporters: + debug: + service: + pipelines: + traces: + receivers: [kubeletstats] + processors: [] + exporters: [debug] diff 
--git a/tests/e2e-automatic-rbac/receiver-kubeletstats/02-assert.yaml b/tests/e2e-automatic-rbac/receiver-kubeletstats/02-assert.yaml new file mode 100644 index 0000000000..bf5b707020 --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-kubeletstats/02-assert.yaml @@ -0,0 +1,30 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: simplest-chainsaw-kubeletstats-cluster-role +rules: +- apiGroups: [""] + resources: ["nodes/stats"] + verbs: ["get"] +- apiGroups: [""] + resources: ["nodes/proxy"] + verbs: ["get"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/instance: chainsaw-kubeletstats.simplest + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: simplest-chainsaw-kubeletstats-collector + app.kubernetes.io/part-of: opentelemetry + name: simplest-chainsaw-kubeletstats-collector +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: simplest-chainsaw-kubeletstats-cluster-role +subjects: +- kind: ServiceAccount + name: simplest-collector + namespace: chainsaw-kubeletstats diff --git a/tests/e2e-automatic-rbac/receiver-kubeletstats/02-install.yaml b/tests/e2e-automatic-rbac/receiver-kubeletstats/02-install.yaml new file mode 100644 index 0000000000..8452f05228 --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-kubeletstats/02-install.yaml @@ -0,0 +1,20 @@ +apiVersion: opentelemetry.io/v1alpha1 +kind: OpenTelemetryCollector +metadata: + name: simplest + namespace: chainsaw-kubeletstats +spec: + config: | + receivers: + kubeletstats: + extra_metadata_labels: + - container.id + processors: + exporters: + debug: + service: + pipelines: + traces: + receivers: [kubeletstats] + processors: [] + exporters: [debug] diff --git a/tests/e2e-automatic-rbac/receiver-kubeletstats/03-assert.yaml b/tests/e2e-automatic-rbac/receiver-kubeletstats/03-assert.yaml new file mode 100644 
index 0000000000..bf5b707020 --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-kubeletstats/03-assert.yaml @@ -0,0 +1,30 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: simplest-chainsaw-kubeletstats-cluster-role +rules: +- apiGroups: [""] + resources: ["nodes/stats"] + verbs: ["get"] +- apiGroups: [""] + resources: ["nodes/proxy"] + verbs: ["get"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/instance: chainsaw-kubeletstats.simplest + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: simplest-chainsaw-kubeletstats-collector + app.kubernetes.io/part-of: opentelemetry + name: simplest-chainsaw-kubeletstats-collector +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: simplest-chainsaw-kubeletstats-cluster-role +subjects: +- kind: ServiceAccount + name: simplest-collector + namespace: chainsaw-kubeletstats diff --git a/tests/e2e-automatic-rbac/receiver-kubeletstats/03-install.yaml b/tests/e2e-automatic-rbac/receiver-kubeletstats/03-install.yaml new file mode 100644 index 0000000000..8452f05228 --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-kubeletstats/03-install.yaml @@ -0,0 +1,20 @@ +apiVersion: opentelemetry.io/v1alpha1 +kind: OpenTelemetryCollector +metadata: + name: simplest + namespace: chainsaw-kubeletstats +spec: + config: | + receivers: + kubeletstats: + extra_metadata_labels: + - container.id + processors: + exporters: + debug: + service: + pipelines: + traces: + receivers: [kubeletstats] + processors: [] + exporters: [debug] diff --git a/tests/e2e-automatic-rbac/receiver-kubeletstats/chainsaw-test.yaml b/tests/e2e-automatic-rbac/receiver-kubeletstats/chainsaw-test.yaml new file mode 100644 index 0000000000..f693722ed3 --- /dev/null +++ b/tests/e2e-automatic-rbac/receiver-kubeletstats/chainsaw-test.yaml @@ -0,0 +1,30 @@ +# 
yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json +apiVersion: chainsaw.kyverno.io/v1alpha1 +kind: Test +metadata: + creationTimestamp: null + name: receiver-kubeletstats +spec: + steps: + - name: create-namespace + try: + - apply: + file: 00-install.yaml + - name: default-config + try: + - apply: + file: 01-install.yaml + - assert: + file: 01-assert.yaml + - name: use-extra_metadata_labels + try: + - apply: + file: 02-install.yaml + - assert: + file: 02-assert.yaml + - name: k8snode-detector + try: + - apply: + file: 03-install.yaml + - assert: + file: 03-assert.yaml \ No newline at end of file diff --git a/tests/e2e-instrumentation/instrumentation-java-tls/.gitignore b/tests/e2e-instrumentation/instrumentation-java-tls/.gitignore new file mode 100644 index 0000000000..b8987f0ba0 --- /dev/null +++ b/tests/e2e-instrumentation/instrumentation-java-tls/.gitignore @@ -0,0 +1,2 @@ +*.crt +*.key \ No newline at end of file diff --git a/tests/e2e-instrumentation/instrumentation-java-tls/00-install-collector.yaml b/tests/e2e-instrumentation/instrumentation-java-tls/00-install-collector.yaml new file mode 100644 index 0000000000..536c392481 --- /dev/null +++ b/tests/e2e-instrumentation/instrumentation-java-tls/00-install-collector.yaml @@ -0,0 +1,43 @@ +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: simplest +spec: + volumeMounts: + - name: certs + mountPath: /certs + - name: certs-ca + mountPath: /certs-ca + volumes: + - name: certs + secret: + secretName: server-certs + - name: certs-ca + configMap: + name: ca + config: + receivers: + otlp: + protocols: + grpc: + endpoint: 0.0.0.0:4317 + tls: + cert_file: /certs/tls.crt + key_file: /certs/tls.key + client_ca_file: /certs-ca/ca.crt + http: + endpoint: 0.0.0.0:4318 + tls: + cert_file: /certs/tls.crt + key_file: /certs/tls.key + client_ca_file: /certs-ca/ca.crt + processors: + + exporters: + debug: {} + + 
service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] diff --git a/tests/e2e-instrumentation/instrumentation-java-tls/00-install-instrumentation.yaml b/tests/e2e-instrumentation/instrumentation-java-tls/00-install-instrumentation.yaml new file mode 100644 index 0000000000..7bc75d7107 --- /dev/null +++ b/tests/e2e-instrumentation/instrumentation-java-tls/00-install-instrumentation.yaml @@ -0,0 +1,19 @@ +apiVersion: opentelemetry.io/v1alpha1 +kind: Instrumentation +metadata: + name: java +spec: + exporter: + endpoint: https://simplest-collector:4317 + tls: + secretName: client-certs + configMapName: ca + ca_file: ca.crt + cert_file: tls.crt + key_file: tls.key + propagators: + - tracecontext + - baggage + sampler: + type: parentbased_traceidratio + argument: "1" diff --git a/tests/e2e-instrumentation/instrumentation-java-tls/01-assert.yaml b/tests/e2e-instrumentation/instrumentation-java-tls/01-assert.yaml new file mode 100644 index 0000000000..7ddecadb47 --- /dev/null +++ b/tests/e2e-instrumentation/instrumentation-java-tls/01-assert.yaml @@ -0,0 +1,70 @@ +apiVersion: v1 +kind: Pod +metadata: + annotations: + instrumentation.opentelemetry.io/inject-java: "true" + labels: + app: my-java +spec: + containers: + - env: + - name: OTEL_NODE_IP + valueFrom: + fieldRef: + fieldPath: status.hostIP + - name: OTEL_POD_IP + valueFrom: + fieldRef: + fieldPath: status.podIP + - name: JAVA_TOOL_OPTIONS + value: ' -javaagent:/otel-auto-instrumentation-java/javaagent.jar' + - name: OTEL_SERVICE_NAME + value: my-java + - name: OTEL_EXPORTER_OTLP_ENDPOINT + value: https://simplest-collector:4317 + - name: OTEL_EXPORTER_OTLP_CERTIFICATE + value: /otel-auto-instrumentation-configmap-ca/ca.crt + - name: OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE + value: /otel-auto-instrumentation-secret-client-certs/tls.crt + - name: OTEL_EXPORTER_OTLP_CLIENT_KEY + value: /otel-auto-instrumentation-secret-client-certs/tls.key + - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME + valueFrom: + 
fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: spec.nodeName + - name: OTEL_PROPAGATORS + value: tracecontext,baggage + - name: OTEL_TRACES_SAMPLER + value: parentbased_traceidratio + - name: OTEL_TRACES_SAMPLER_ARG + value: "1" + - name: OTEL_RESOURCE_ATTRIBUTES + name: myapp + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + readOnly: true + - mountPath: /otel-auto-instrumentation-java + name: opentelemetry-auto-instrumentation-java + - mountPath: /otel-auto-instrumentation-secret-client-certs + name: otel-auto-secret-client-certs + readOnly: true + - mountPath: /otel-auto-instrumentation-configmap-ca + name: otel-auto-configmap-ca + readOnly: true + initContainers: + - name: opentelemetry-auto-instrumentation-java +status: + containerStatuses: + - name: myapp + ready: true + started: true + initContainerStatuses: + - name: opentelemetry-auto-instrumentation-java + ready: true + phase: Running diff --git a/tests/e2e-instrumentation/instrumentation-java-tls/01-install-app.yaml b/tests/e2e-instrumentation/instrumentation-java-tls/01-install-app.yaml new file mode 100644 index 0000000000..0d16826e53 --- /dev/null +++ b/tests/e2e-instrumentation/instrumentation-java-tls/01-install-app.yaml @@ -0,0 +1,27 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: my-java +spec: + selector: + matchLabels: + app: my-java + replicas: 1 + template: + metadata: + labels: + app: my-java + annotations: + instrumentation.opentelemetry.io/inject-java: "true" + spec: + securityContext: + runAsUser: 1000 + runAsGroup: 3000 + fsGroup: 3000 + containers: + - name: myapp + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-java:main + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: ["ALL"] diff --git a/tests/e2e-instrumentation/instrumentation-java-tls/ca.yaml 
b/tests/e2e-instrumentation/instrumentation-java-tls/ca.yaml new file mode 100644 index 0000000000..c078708fd6 --- /dev/null +++ b/tests/e2e-instrumentation/instrumentation-java-tls/ca.yaml @@ -0,0 +1,30 @@ +apiVersion: v1 +data: + ca.crt: | + -----BEGIN CERTIFICATE----- + MIID3zCCAsegAwIBAgIUbgTamPDD9mF7SzjykOtjZ6eOJygwDQYJKoZIhvcNAQEL + BQAwfjELMAkGA1UEBhMCVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcM + DU1vdW50YWluIFZpZXcxGjAYBgNVBAoMEVlvdXIgT3JnYW5pemF0aW9uMRIwEAYD + VQQLDAlZb3VyIFVuaXQxEjAQBgNVBAMMCWxvY2FsaG9zdDAgFw0yNDEwMTAxMjQw + MTFaGA8yMDUxMDMxMzEyNDAxMVowfjELMAkGA1UEBhMCVVMxEzARBgNVBAgMCkNh + bGlmb3JuaWExFjAUBgNVBAcMDU1vdW50YWluIFZpZXcxGjAYBgNVBAoMEVlvdXIg + T3JnYW5pemF0aW9uMRIwEAYDVQQLDAlZb3VyIFVuaXQxEjAQBgNVBAMMCWxvY2Fs + aG9zdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMax8x9QrIB924Tn + J+GhOsvEU6DDTbntLS8rXy7ePeCrUgjh+E3ThzvdZFqqx8ffVmrDVd8SF9TabXWC + j4Bytyv1AxBN8+PviXjyDeF5qSYEzh9K9poJCnTPOXZcToEna0Q5Po41fFY/M5QL + 7YHBrlc4rJKd+CJmQ0bjUj1OjG0NBT2Xm0rU1o92+73CMb//ADd8XkqDunHMfILe + wyWDiTbXsgXuh62cdmQyAL98xH0ghSrGYM2KA/F9FvD51B2+CDs2YwET4IsRTAt+ + 9nLJpjrN7o+lofnhGWy88wPwlzJZeMP3oyna2iVlemXXYZeYXv2uRN6DCLUaamXT + sy2sawECAwEAAaNTMFEwHQYDVR0OBBYEFI7foDRaBz788AJJcAo0wC422LDUMB8G + A1UdIwQYMBaAFI7foDRaBz788AJJcAo0wC422LDUMA8GA1UdEwEB/wQFMAMBAf8w + DQYJKoZIhvcNAQELBQADggEBAIyVPNo2vsiRoqeJjaDCUSJFzop4ykdQOsOUMeJT + UqiJvH87unmEm50QgGOwsSxYPZkaPosxjnIFs9lVXixIcETtqbb8DT2AU9muDJ4o + 2p8tYBD/4jTN0I6waEpsubMwz+U4llxyfCG0UK3/6kpFwi8/723i8LwzynwkMiki + gtAPGmo1QwMFW/2w24l/+Uo4dhrq3GpuV2qBwyYc04z88abvAzRy/wIdw0IC4DiO + nNNN1SsjAeN+wp1dm0ohDm4z5d60O9CiTtggizzONJ8tln9SkyN6fCvpArgp9xxD + vChKkZiGSJlRoql1k8nRvHBaPZ9e3L8MEw7LgrkPSgleaNI= + -----END CERTIFICATE----- +kind: ConfigMap +metadata: + creationTimestamp: null + name: ca diff --git a/tests/e2e-instrumentation/instrumentation-java-tls/chainsaw-test.yaml b/tests/e2e-instrumentation/instrumentation-java-tls/chainsaw-test.yaml new file mode 100755 index 0000000000..f552743fa3 --- /dev/null +++ 
b/tests/e2e-instrumentation/instrumentation-java-tls/chainsaw-test.yaml @@ -0,0 +1,46 @@ +# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json +apiVersion: chainsaw.kyverno.io/v1alpha1 +kind: Test +metadata: + creationTimestamp: null + name: instrumentation-java-tls +spec: + steps: + - name: step-00 + try: + # In OpenShift, when a namespace is created, all necessary SCC annotations are automatically added. However, if a namespace is created using a resource file with only selected SCCs, the other auto-added SCCs are not included. Therefore, the UID-range and supplemental groups SCC annotations must be set after the namespace is created. + - command: + entrypoint: kubectl + args: + - annotate + - namespace + - ${NAMESPACE} + - openshift.io/sa.scc.uid-range=1000/1000 + - --overwrite + - command: + entrypoint: kubectl + args: + - annotate + - namespace + - ${NAMESPACE} + - openshift.io/sa.scc.supplemental-groups=3000/3000 + - --overwrite + - apply: + file: ca.yaml + - apply: + file: client-secret.yaml + - apply: + file: server-secret.yaml + - apply: + file: 00-install-collector.yaml + - apply: + file: 00-install-instrumentation.yaml + - name: step-01 + try: + - apply: + file: 01-install-app.yaml + - assert: + file: 01-assert.yaml + catch: + - podLogs: + selector: app=my-java \ No newline at end of file diff --git a/tests/e2e-instrumentation/instrumentation-java-tls/client-secret.yaml b/tests/e2e-instrumentation/instrumentation-java-tls/client-secret.yaml new file mode 100644 index 0000000000..d038b02d89 --- /dev/null +++ b/tests/e2e-instrumentation/instrumentation-java-tls/client-secret.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +data: + tls.crt: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ2VENDQXRHZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREIrTVFzd0NRWURWUVFHRXdKVlV6RVQKTUJFR0ExVUVDQXdLUTJGc2FXWnZjbTVwWVRFV01CUUdBMVVFQnd3TlRXOTFiblJoYVc0Z1ZtbGxkekVhTUJnRwpBMVVFQ2d3UldXOTFjaUJQY21kaGJtbDZZWFJwYjI0eEVqQVFCZ05WQkFzTUNWbHZkWElnVlc1cGRERVNNQkFHCkExVUVBd3dKYkc5allXeG9iM04wTUNBWERUSTBNVEF4TURFeU5EQXhNVm9ZRHpJd05URXdNekV6TVRJME1ERXgKV2pDQm1qRUxNQWtHQTFVRUJoTUNWVk14RXpBUkJnTlZCQWdNQ2tOaGJHbG1iM0p1YVdFeEZqQVVCZ05WQkFjTQpEVTF2ZFc1MFlXbHVJRlpwWlhjeEdqQVlCZ05WQkFvTUVWbHZkWElnVDNKbllXNXBlbUYwYVc5dU1SSXdFQVlEClZRUUxEQWxaYjNWeUlGVnVhWFF4R2pBWUJnTlZCQU1NRVhOMll5NWpiSFZ6ZEdWeUxteHZZMkZzTVJJd0VBWUQKVlFRRERBbHNiMk5oYkdodmMzUXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDeApNSzljZDgvb0tHL0FGaTZUS2tsYmNzdlNSS0dTQUkvU2NSNVJQU3FoVW5sQ1NITEhpdXU0TDhSS202RGhGekVaCngvVVppSlREU3BWTllKcHpzZ1B5YkxWU1NQL2k5SDVqWGc3NW5MbUNUNmtRYWV5NG5EUjBZMEp3ZkdaaDB0eWwKMnpMMUlZdnVCYkZQTjMwRVp0bE82WVN3eWtqYjMwcjl3eFhrS1lsMFlCcEJFbkpqYyt5SW44OW52amNtTHgvVApkd3JPTlNXc2QzdXJpdXJNMWFWQlFjd09tMG8rMG9aVFdJY2daa2hPM0cvV29uZHBnSVl2OWF6dWYrL2craWs1ClQwZXBQR3RJSUsrajNqN3lGb3lkMFVONmprb2hyclV1bUFZRmF5UkdkRjhxNzc5dXIxZ0hIcXozRkE0YVY3aEoKd2JWNTNLV0ZCb2hLZEtqQkdVcHBBZ01CQUFHalV6QlJNQjBHQTFVZERnUVdCQlFhK2NpYTZvaS9RenhtZm43RgpiQnhOWTF0bXpEQWZCZ05WSFNNRUdEQVdnQlNPMzZBMFdnYysvUEFDU1hBS05NQXVOdGl3MURBUEJnTlZIUk1CCkFmOEVCVEFEQVFIL01BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ0xvYTFIWnd5aGRQMyt4cXZYUzN5Z0FlbDAKMTdXWEpuM0FocndsajRxUFpvR2NrT0FYY1RoYm9ZaExITnV1ZVpPRkpIWFd4dU5lZDR2Y2xONEVocFdrWVRZTApodUtDN1IvVFVXVnAxMnh1SzNsdTBMZ3ZacUQ5bW05bTlXUW02eVJ5dmZCT0JBc3BLYXdtd0ljS0NLa3RCUnRpClRBaWxxNy9zOVZKRnFtSWNWWEg5bGtIaWNBMUROUjhMc2JSUWJtMUZOSCt4eGd1bTd6cURJbGZlZzhFMkg3WVIKVnRpNXZmZmRUUHVqZHlGaWZJbHR0cmw0MThwVDNVWEJ3UWFGRkVJSHlFdjlDamI5RFRMWDMrTDdlaWVYdXNrcgpxUkZzVjFIa3Arc1IwMTVNckxpUU9uOThjYUlKcG4yN3JOaU9GeVU2b3R3eTR0WGtmd0RTNlJLMkV5T24KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= + tls.key: 
LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ3hNSzljZDgvb0tHL0EKRmk2VEtrbGJjc3ZTUktHU0FJL1NjUjVSUFNxaFVubENTSExIaXV1NEw4UkttNkRoRnpFWngvVVppSlREU3BWTgpZSnB6c2dQeWJMVlNTUC9pOUg1alhnNzVuTG1DVDZrUWFleTRuRFIwWTBKd2ZHWmgwdHlsMnpMMUlZdnVCYkZQCk4zMEVadGxPNllTd3lramIzMHI5d3hYa0tZbDBZQnBCRW5KamMreUluODludmpjbUx4L1Rkd3JPTlNXc2QzdXIKaXVyTTFhVkJRY3dPbTBvKzBvWlRXSWNnWmtoTzNHL1dvbmRwZ0lZdjlhenVmKy9nK2lrNVQwZXBQR3RJSUsragozajd5Rm95ZDBVTjZqa29ocnJVdW1BWUZheVJHZEY4cTc3OXVyMWdISHF6M0ZBNGFWN2hKd2JWNTNLV0ZCb2hLCmRLakJHVXBwQWdNQkFBRUNnZ0VBQ2ozZzZQbnc2R3BMN3FEKzcxa3luOGlqeTRNcTNwNzRob2FzejFwM0tCZDEKdEZVdWlKRDQvUzYwOXh0YlFoOXp6UVJ2NEVVSUtqM3U5dEpydUY1cUF0TEhRVVZyRmFjaHEyU0YvanRtNy9JTgpvNG45THVlSDB2d09IS1V5eXNVNWVwUmxYb3kzbUpvTXpMZDRSYTlITU1hU2VhQ0dTUlUxUndrd3hGSjZwRGExCml2am1neXZJRkp5L3RWMGxQSStUWHpnRzdzdkI1Y01CZDFVNjJXN2VkNDBDQWF6bmg1R09FZHZ1YmFaYmZmSWsKZ2huZ0N0ak5EU0p1Rk5ma1ZXdWNnTmErejVFK2dFYjZBdHJBMEhtZEtVR1V2UkRrNUhiVGZJR09ka0hzcnF5UApSSlk0WndFcEFxMWZhY2FxcWVwMXZsVU5vSmNNTVJhc29VOTZGU0lzelFLQmdRRFpCRFBUa0lQeFFNYXF1c3lDCmw5UkpRVy9OZGpuaDl3enRwOUhYWXNTcUEyU3VQQWVPMGdpWWdBOC9MZUpPTFhYelc0dWp2blhLcENveUNiQ3oKc0IvUm9MeTkrWENiODIzdDlKSTJvczZpbFhGbUo4S21OTUlQdExHVG82T1RJNlNseUNJMUFHRzdtdkVhVWY2SQpmanJaL25pY0c0Tlo5TU5rNVg0M2J5UjUzUUtCZ1FEUkJRU3VNWlNyVFFTMGpWbFZQaGxWdHRhV29vbTE5cFRjCmpjYS9vRTZ0Z2RRb2dUalVEZDlDRmtJZ1VxYmNpYlo0ZitnUmhHNHF4VkxwMDBxR0JLWFdYOXlBQTBtMmpNa3MKZkFDbTdZbFNvTElLNkY2M0FuWk9Kck5ETWYyejd1WWthWDBQRk4rTXZsLzRiQTFYMTFEcFRSdG4vL2QyLzBuMwpTeW1LWnVJWC9RS0JnRkh5TjB1OU4wVmpLMkdPdGVqZVFpZ0RVSjlwOUVOeVVXeHdRVm11andxUHkzWExieU1zCkJsam5pbHBXRGkxdEZ5djB0bzczUFcxdWZneDFBa2RueXl3U0lSTXZYS2xXeTN6ZUxGUDdPRUhHWXBLcmt1SEYKN0QyWUFySDRTYTBtK1dZc1kxWldOWkZzMlh3UjJDWmNYQWF6QTRJWEZZdGpWR0VHRTVvRkd1WDFBb0dBTTVXWAplQjRJWU5aYktPd1JkZllqYm9IM0o2bnBicHp5VkJReFRxMlRmVUtqUjNQTXdKakQxcDJEcUZKOWw4UHM0b1ErCms4UXBKQ2thczFaUCtBOUJsa3lHTUptZklZeFJRY2RBcWZISmlEamNkOUN0UDJFK0xUOWowbHVPRDFBUVFFQkEKZXU1ZDFYQk9ZeExYb0N3bGJjNTN5d3ppMTkxZE5jaTQ4YzArVTBrQ2dZ
RUF5eXdIYkcxOUd4dlNxZXNON2JCQgp6Yy8zYm1qczJocjMvYUxoQWUzMDZUbEdRUkg3Y3lCYzFpR2ZONTF0UTVqV01ZRHg1UndBMmNhUEwwcnl6REhmCjg0SktJeW1pVDB2ckFYRFN2bEorK21BVG1BQnNLcGpSVnpKWTJlVHFqNGQ1NUgyOUdudTVPVUtpMDY1Y2c5WEUKbVIrQ1o5Y1FqN212MDNVbW45MjVJWjQ9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K +kind: Secret +metadata: + creationTimestamp: null + name: client-certs +type: kubernetes.io/tls diff --git a/tests/e2e-instrumentation/instrumentation-java-tls/generate-certs.sh b/tests/e2e-instrumentation/instrumentation-java-tls/generate-certs.sh new file mode 100755 index 0000000000..d4070b03c9 --- /dev/null +++ b/tests/e2e-instrumentation/instrumentation-java-tls/generate-certs.sh @@ -0,0 +1,14 @@ +#!/usr/bin/env bash + +set -ex + +# CA key and cert +openssl req -new -nodes -x509 -days 9650 -keyout ca.key -out ca.crt -subj "/C=US/ST=California/L=Mountain View/O=Your Organization/OU=Your Unit/CN=localhost" +# Server, e.g. use DNS:*.default.svc.cluster.local for an arbitrary collector name deployed in the default namespace +openssl req -new -nodes -x509 -CA ca.crt -CAkey ca.key -days 9650 -set_serial 01 -keyout server.key -out server.crt -subj "/C=US/ST=California/L=Mountain View/O=Your Organization/OU=Your Unit/CN=svc.cluster.local/CN=localhost" -addext "subjectAltName = DNS:simplest-collector,DNS:localhost" +# Client +openssl req -new -nodes -x509 -CA ca.crt -CAkey ca.key -days 9650 -set_serial 01 -keyout client.key -out client.crt -subj "/C=US/ST=California/L=Mountain View/O=Your Organization/OU=Your Unit/CN=svc.cluster.local/CN=localhost" + +kubectl create configmap ca --from-file=ca.crt=ca.crt -o yaml --dry-run=client > ca.yaml +kubectl create secret tls server-certs --cert=server.crt --key=server.key -o yaml --dry-run=client > server-secret.yaml +kubectl create secret tls client-certs --cert=client.crt --key=client.key -o yaml --dry-run=client > client-secret.yaml diff --git a/tests/e2e-instrumentation/instrumentation-java-tls/server-secret.yaml
b/tests/e2e-instrumentation/instrumentation-java-tls/server-secret.yaml new file mode 100644 index 0000000000..63afbd2286 --- /dev/null +++ b/tests/e2e-instrumentation/instrumentation-java-tls/server-secret.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +data: + tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVPVENDQXlHZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREIrTVFzd0NRWURWUVFHRXdKVlV6RVQKTUJFR0ExVUVDQXdLUTJGc2FXWnZjbTVwWVRFV01CUUdBMVVFQnd3TlRXOTFiblJoYVc0Z1ZtbGxkekVhTUJnRwpBMVVFQ2d3UldXOTFjaUJQY21kaGJtbDZZWFJwYjI0eEVqQVFCZ05WQkFzTUNWbHZkWElnVlc1cGRERVNNQkFHCkExVUVBd3dKYkc5allXeG9iM04wTUNBWERUSTBNVEF4TURFeU5EQXhNVm9ZRHpJd05URXdNekV6TVRJME1ERXgKV2pDQm1qRUxNQWtHQTFVRUJoTUNWVk14RXpBUkJnTlZCQWdNQ2tOaGJHbG1iM0p1YVdFeEZqQVVCZ05WQkFjTQpEVTF2ZFc1MFlXbHVJRlpwWlhjeEdqQVlCZ05WQkFvTUVWbHZkWElnVDNKbllXNXBlbUYwYVc5dU1SSXdFQVlEClZRUUxEQWxaYjNWeUlGVnVhWFF4R2pBWUJnTlZCQU1NRVhOMll5NWpiSFZ6ZEdWeUxteHZZMkZzTVJJd0VBWUQKVlFRRERBbHNiMk5oYkdodmMzUXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDcgpGN0U5SmtScldZYmM1Wkh2VU5GM2NZNVFaL0FrWktiN1Exend3aGFhQmtqVldWUEJCeUFmakNzM3laU3RGN2RkClo1VUxKNzN1VmtSdEJuTFd5bUJqbTZMN2xHVms2c0VqSXdFQkFUUDVJSksxdjBONFliSExYM1RJMDZXdlNCM2gKbGpjRzZOY0d0djE3NGFnbU1xZ05Bc0lmYlhEcW0xWkZSQzhCa0IwK2Jhc1hLUzJJS3VuSlIweEl1eTlFL09IWgp4b3hLUFFyQjNMcFBJNDAzSThPR2h1alZiY2xvVzh3UEljaVFwOGdJNVU3UWRJWmwzVkszcTZFWkNTVVhiMGNMClEyNnQ3TDZiRmlWcmhrTm5DZGtOU1Fybm55V2p4cldTdmdLTVZWaUtOVU1XQ0pGRWk1MytlSllxZkkzRTBhYXAKNlNsQ3NUQlE2akJYd0Y5R01mRURBZ01CQUFHamdhSXdnWjh3SFFZRFZSME9CQllFRkhZMjV2bmRHZGdDSC9zUgpMREx3Wmc4Ry9reXpNQjhHQTFVZEl3UVlNQmFBRkk3Zm9EUmFCejc4OEFKSmNBbzB3QzQyMkxEVU1BOEdBMVVkCkV3RUIvd1FGTUFNQkFmOHdUQVlEVlIwUkJFVXdRNElTYzJsdGNHeGxjM1F0WTI5c2JHVmpkRzl5Z2lJcUxuUnkKWVdOcGJtY3RjM2x6ZEdWdExuTjJZeTVqYkhWemRHVnlMbXh2WTJGc2dnbHNiMk5oYkdodmMzUXdEUVlKS29aSQpodmNOQVFFTEJRQURnZ0VCQURVNWtCUnRHaTlYalh5TUZJWEdKYTZtNEJrSTFOK0ZqUUprbGVSbWFOaWljRzhwCngzcnNBLzVVZU8wR0pDOXIrMWpxYThtVjhtUzcrMVVNUWJwMTZFUlZOVDBHWnA1TXNFdERqUHBveWFtM1JOZ1YKT2QwMnhZUUNNK3NGMVdNWll3M05FMEszTkJRcncyY3hoKzg2U29GdD
BMdDEyYlBmbkxGTzJSdmI3b09aaHB0aApLNjJxaUlOVXAveG85S3hEelVhT1R1SndsTFdmTWQzMS80NFNyZm1hVWpGMWhKRDJ2SHZGeHZhcVNvSmJNK3pkCm1lbGo4cFhkS1ZJaWFRQ3hQZ1E4bTRHUmhQWkFuTk1pVlhiVm9vR05pelFaeEt2UWVwWnpXRTRXNmNFZTJtTWoKYlYvNVBOc3l5Mk0wOWY0MjlvTGxtNEpjb3IyMWZFSGVNVG1PYytzPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== + tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ3JGN0U5SmtScldZYmMKNVpIdlVORjNjWTVRWi9Ba1pLYjdRMXp3d2hhYUJralZXVlBCQnlBZmpDczN5WlN0RjdkZFo1VUxKNzN1VmtSdApCbkxXeW1Cam02TDdsR1ZrNnNFakl3RUJBVFA1SUpLMXYwTjRZYkhMWDNUSTA2V3ZTQjNobGpjRzZOY0d0djE3CjRhZ21NcWdOQXNJZmJYRHFtMVpGUkM4QmtCMCtiYXNYS1MySUt1bkpSMHhJdXk5RS9PSFp4b3hLUFFyQjNMcFAKSTQwM0k4T0dodWpWYmNsb1c4d1BJY2lRcDhnSTVVN1FkSVpsM1ZLM3E2RVpDU1VYYjBjTFEyNnQ3TDZiRmlWcgpoa05uQ2RrTlNRcm5ueVdqeHJXU3ZnS01WVmlLTlVNV0NKRkVpNTMrZUpZcWZJM0UwYWFwNlNsQ3NUQlE2akJYCndGOUdNZkVEQWdNQkFBRUNnZ0VBQWZ0d3lhQk10SVFoZzVYQzIrZUxvVVhvc1QxejYvVlo1d1BodVNsMW1laXIKU1QwTUZxT1Z4YjdvQ21MUFVsSFdNQkRtTlB5WjFGVUJhMkticjNEUUJCSVV0QWxhWDVqaU9TdDlYeGxVb3NNegp5cGg2MkZxeXUrZTd0Z2UwWGZ0elRMc05hRVBPNEp4dFdOVWgxcVJYdG1yZ1grQzU0aDdWSjFGSkU2L1dJblJJClF1ajEydytCTXVoUUcyRVM0cHpQQ2RtUGgvTkdSV1Q1K0NuamJzMkdjS1EvNXZIaXNnZVAyWEU2ZWRmbldCcFAKN0xNYjFCUGdLU09FTXZ2eUI2R2lMU2xwYWpKTmRtUHpmQmZURllWT05RTk5RYWJhRVFoNG9Gd2s1SkJKTlB5dQo1bCt2UlRpTEtZdk45YzB6Q1ZLa2NuZ0VyaXc0cTVCT3pPa3l1bGlNQVFLQmdRRHNzNkw2UVppNTRTOUh2ckRLCmxSMms4WlRoR1NSZ25uQ1Q2ZHVBcG50VEJpMWs3RUdGWFlWWGtIcEtpZEV5NFNKUmc4bXlSeTNjd1J3S2UzNVgKQVU5aURrajNjN3JLa041RmNleURqTzhXRVptdzZsb213aUtaczk0Y3FHY2tEWnEzSFNROStiMmNjZXFvc3QzSApOWE9ZUHBZdUZKUDAraFFEVFErSytXZUZBd0tCZ1FDNUNxNmJrSEFmQlp3OG5LeG5od3drV21JOTFjV3Fya21VCm5vamRPUkRxV1I1aWZDS05uQi9La0R4RHhGMDNRTjF5alpqcGZFOXdZRm42Vys4ZitPZXNBSGhaODRyZmhUTUgKTTd0VXZoNnNzcSt4QjhkWkJIeTRMd0prQTBPSFBqV2ZtaU96WllwczloUW5WRzAyRDBkdWZlWUNOd0hYN2hTRgptckp6RnJJa0FRS0JnUUNsclZiMk05UGl4MnVBbkVqQ2czMHNacXYrb3NxRGxtTFdKV291c2xpLzFDTVI4UXdyCmZUcElBQ2lZNDc0NkRyc21zMGdLTVNnNHpESUVaRXdhT2lDR1VkbGcydkJ6dU5MYmFOSlRnZUlYWUZwaktxWFAKV3pNOH
dsbEZWZHBic2VvSklheXNkSkh6WHdrUTY2R3dQZ21iRnJPbnJWK2lxU2c0NTBkcHp3aFdZUUtCZ0JKNQpWWXRrZFQwem96Q045OHh5T0MwYzlQZjFjc0dpbXVnQ2wrbDJQQkVaaXFZTWZLcWtycXZia0ppM2J4TUlIOVBDCi9VUTZTL2dOTm81L1JUVnM5VHcvNDhRZlEzc2pZai9TMDE0WGlScDIwSUdkSkRMbjlzZXdzYzFvWWdLTG5IRHQKdzZpeWQ0cC9XdTIrU1JUL200TVZnTFF4NTdZMko4aGE5SHYzQlJ3QkFvR0FVMkNXRkJUWk1HL0NCZDhiOG5VeQpHblltNnhIMHpkeWdzK3JWakl5aEhBRXBTODZCaTZGSGh6TkNYRDFjSCtJWjAvYzR0QmYwSjhsVVRvOXpCV1YrClBJNjJOaEVMOW45WjMydTJGNzRDS0VTUi9EdUhTNXBTT2M3eXgvU3BOMjBpUkpodFl5aHA2YU9HMXQwL2NzRHAKVFpvOHlEdzl5cmVTdU9VSUFmM3JraG89Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K +kind: Secret +metadata: + creationTimestamp: null + name: server-certs +type: kubernetes.io/tls diff --git a/tests/e2e-instrumentation/instrumentation-nginx-contnr-secctx/01-install-app.yaml b/tests/e2e-instrumentation/instrumentation-nginx-contnr-secctx/01-install-app.yaml index eea887ae21..d1ae0e239f 100644 --- a/tests/e2e-instrumentation/instrumentation-nginx-contnr-secctx/01-install-app.yaml +++ b/tests/e2e-instrumentation/instrumentation-nginx-contnr-secctx/01-install-app.yaml @@ -22,7 +22,6 @@ spec: securityContext: runAsUser: 1000 runAsGroup: 3000 - fsGroup: 3000 ports: - containerPort: 8765 env: @@ -33,7 +32,6 @@ spec: mountPath: /etc/nginx/nginx.conf subPath: nginx.conf readOnly: true - imagePullPolicy: Always resources: limits: cpu: "1" diff --git a/tests/e2e-instrumentation/instrumentation-nginx-multicontainer/01-install-app.yaml b/tests/e2e-instrumentation/instrumentation-nginx-multicontainer/01-install-app.yaml index 523a44efcf..3a50ecd54d 100644 --- a/tests/e2e-instrumentation/instrumentation-nginx-multicontainer/01-install-app.yaml +++ b/tests/e2e-instrumentation/instrumentation-nginx-multicontainer/01-install-app.yaml @@ -28,7 +28,6 @@ spec: securityContext: runAsUser: 1000 runAsGroup: 3000 - fsGroup: 3000 runAsNonRoot: true allowPrivilegeEscalation: false seccompProfile: @@ -57,7 +56,6 @@ spec: securityContext: runAsUser: 1000 runAsGroup: 3000 - fsGroup: 3000 runAsNonRoot: 
true seccompProfile: type: RuntimeDefault diff --git a/tests/e2e-instrumentation/instrumentation-nginx-multicontainer/02-install-app.yaml b/tests/e2e-instrumentation/instrumentation-nginx-multicontainer/02-install-app.yaml index ab80a2db5a..0f2b3a828b 100644 --- a/tests/e2e-instrumentation/instrumentation-nginx-multicontainer/02-install-app.yaml +++ b/tests/e2e-instrumentation/instrumentation-nginx-multicontainer/02-install-app.yaml @@ -28,7 +28,6 @@ spec: securityContext: runAsUser: 1000 runAsGroup: 3000 - fsGroup: 3000 runAsNonRoot: true allowPrivilegeEscalation: false seccompProfile: @@ -45,7 +44,6 @@ spec: mountPath: /etc/nginx/nginx.conf subPath: nginx.conf readOnly: true - imagePullPolicy: Always resources: limits: cpu: 500m @@ -58,7 +56,6 @@ spec: securityContext: runAsUser: 1000 runAsGroup: 3000 - fsGroup: 3000 runAsNonRoot: true seccompProfile: type: RuntimeDefault diff --git a/tests/e2e-instrumentation/instrumentation-nodejs-volume/00-install-collector.yaml b/tests/e2e-instrumentation/instrumentation-nodejs-volume/00-install-collector.yaml new file mode 100644 index 0000000000..34a26ebb2c --- /dev/null +++ b/tests/e2e-instrumentation/instrumentation-nodejs-volume/00-install-collector.yaml @@ -0,0 +1,22 @@ +apiVersion: opentelemetry.io/v1alpha1 +kind: OpenTelemetryCollector +metadata: + name: sidecar +spec: + config: | + receivers: + otlp: + protocols: + grpc: + http: + processors: + + exporters: + debug: + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + mode: sidecar diff --git a/tests/e2e-instrumentation/instrumentation-nodejs-volume/00-install-instrumentation.yaml b/tests/e2e-instrumentation/instrumentation-nodejs-volume/00-install-instrumentation.yaml new file mode 100644 index 0000000000..06c5c8dd03 --- /dev/null +++ b/tests/e2e-instrumentation/instrumentation-nodejs-volume/00-install-instrumentation.yaml @@ -0,0 +1,38 @@ +apiVersion: opentelemetry.io/v1alpha1 +kind: Instrumentation +metadata: + name: nodejs +spec: + env: 
+  - name: OTEL_TRACES_EXPORTER
+    value: otlp
+  - name: OTEL_EXPORTER_OTLP_ENDPOINT
+    value: http://localhost:4317
+  - name: OTEL_EXPORTER_OTLP_TIMEOUT
+    value: "20"
+  - name: OTEL_TRACES_SAMPLER
+    value: parentbased_traceidratio
+  - name: OTEL_TRACES_SAMPLER_ARG
+    value: "0.85"
+  - name: SPLUNK_TRACE_RESPONSE_HEADER_ENABLED
+    value: "true"
+  - name: OTEL_METRICS_EXPORTER
+    value: prometheus
+  exporter:
+    endpoint: http://localhost:4317
+  propagators:
+  - jaeger
+  - b3
+  sampler:
+    type: parentbased_traceidratio
+    argument: "0.25"
+  nodejs:
+    env:
+    - name: OTEL_NODEJS_DEBUG
+      value: "true"
+    volumeClaimTemplate:
+      spec:
+        accessModes: ["ReadWriteOnce"]
+        resources:
+          requests:
+            storage: 1Gi
diff --git a/tests/e2e-instrumentation/instrumentation-nodejs-volume/01-assert.yaml b/tests/e2e-instrumentation/instrumentation-nodejs-volume/01-assert.yaml
new file mode 100644
index 0000000000..83e32efc3a
--- /dev/null
+++ b/tests/e2e-instrumentation/instrumentation-nodejs-volume/01-assert.yaml
@@ -0,0 +1,84 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  annotations:
+    instrumentation.opentelemetry.io/inject-nodejs: "true"
+    sidecar.opentelemetry.io/inject: "true"
+  labels:
+    app: my-nodejs
+spec:
+  containers:
+  - env:
+    - name: OTEL_NODE_IP
+      valueFrom:
+        fieldRef:
+          fieldPath: status.hostIP
+    - name: OTEL_POD_IP
+      valueFrom:
+        fieldRef:
+          fieldPath: status.podIP
+    - name: NODE_PATH
+      value: /usr/local/lib/node_modules
+    - name: OTEL_NODEJS_DEBUG
+      value: "true"
+    - name: NODE_OPTIONS
+      value: " --require /otel-auto-instrumentation-nodejs/autoinstrumentation.js"
+    - name: OTEL_TRACES_EXPORTER
+      value: otlp
+    - name: OTEL_EXPORTER_OTLP_ENDPOINT
+      value: http://localhost:4317
+    - name: OTEL_EXPORTER_OTLP_TIMEOUT
+      value: "20"
+    - name: OTEL_TRACES_SAMPLER
+      value: parentbased_traceidratio
+    - name: OTEL_TRACES_SAMPLER_ARG
+      value: "0.85"
+    - name: SPLUNK_TRACE_RESPONSE_HEADER_ENABLED
+      value: "true"
+    - name: OTEL_METRICS_EXPORTER
+      value: prometheus
+    - name: OTEL_SERVICE_NAME
+      value: my-nodejs
+    - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
+      valueFrom:
+        fieldRef:
+          apiVersion: v1
+          fieldPath: metadata.name
+    - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
+      valueFrom:
+        fieldRef:
+          apiVersion: v1
+          fieldPath: spec.nodeName
+    - name: OTEL_PROPAGATORS
+      value: jaeger,b3
+    - name: OTEL_RESOURCE_ATTRIBUTES
+    name: myapp
+    volumeMounts:
+    - mountPath: /otel-auto-instrumentation-nodejs
+      name: opentelemetry-auto-instrumentation-nodejs
+  - args:
+    - --config=env:OTEL_CONFIG
+    name: otc-container
+  initContainers:
+  - name: opentelemetry-auto-instrumentation-nodejs
+  volumes:
+  - name: opentelemetry-auto-instrumentation-nodejs
+    ephemeral:
+      volumeClaimTemplate:
+        spec:
+          accessModes: ["ReadWriteOnce"]
+          resources:
+            requests:
+              storage: 1Gi
+status:
+  containerStatuses:
+  - name: myapp
+    ready: true
+    started: true
+  - name: otc-container
+    ready: true
+    started: true
+  initContainerStatuses:
+  - name: opentelemetry-auto-instrumentation-nodejs
+    ready: true
+  phase: Running
diff --git a/tests/e2e-instrumentation/instrumentation-nodejs-volume/01-install-app.yaml b/tests/e2e-instrumentation/instrumentation-nodejs-volume/01-install-app.yaml
new file mode 100644
index 0000000000..b219006dfc
--- /dev/null
+++ b/tests/e2e-instrumentation/instrumentation-nodejs-volume/01-install-app.yaml
@@ -0,0 +1,32 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: my-nodejs
+spec:
+  selector:
+    matchLabels:
+      app: my-nodejs
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        app: my-nodejs
+      annotations:
+        sidecar.opentelemetry.io/inject: "true"
+        instrumentation.opentelemetry.io/inject-nodejs: "true"
+    spec:
+      securityContext:
+        runAsUser: 1000
+        runAsGroup: 3000
+        fsGroup: 3000
+      containers:
+      - name: myapp
+        image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-nodejs:main
+        securityContext:
+          allowPrivilegeEscalation: false
+          capabilities:
+            drop: ["ALL"]
+        env:
+        - name: NODE_PATH
+          value: /usr/local/lib/node_modules
+      automountServiceAccountToken: false
diff --git a/tests/e2e-instrumentation/instrumentation-nodejs-volume/chainsaw-test.yaml b/tests/e2e-instrumentation/instrumentation-nodejs-volume/chainsaw-test.yaml
new file mode 100755
index 0000000000..7156d05d37
--- /dev/null
+++ b/tests/e2e-instrumentation/instrumentation-nodejs-volume/chainsaw-test.yaml
@@ -0,0 +1,40 @@
+# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
+apiVersion: chainsaw.kyverno.io/v1alpha1
+kind: Test
+metadata:
+  creationTimestamp: null
+  name: instrumentation-nodejs-volume
+spec:
+  steps:
+  - name: step-00
+    try:
+    # In OpenShift, when a namespace is created, all necessary SCC annotations are automatically added. However, if a namespace is created using a resource file with only selected SCCs, the other auto-added SCCs are not included. Therefore, the UID-range and supplemental groups SCC annotations must be set after the namespace is created.
+    - command:
+        entrypoint: kubectl
+        args:
+        - annotate
+        - namespace
+        - ${NAMESPACE}
+        - openshift.io/sa.scc.uid-range=1000/1000
+        - --overwrite
+    - command:
+        entrypoint: kubectl
+        args:
+        - annotate
+        - namespace
+        - ${NAMESPACE}
+        - openshift.io/sa.scc.supplemental-groups=3000/3000
+        - --overwrite
+    - apply:
+        file: 00-install-collector.yaml
+    - apply:
+        file: 00-install-instrumentation.yaml
+  - name: step-01
+    try:
+    - apply:
+        file: 01-install-app.yaml
+    - assert:
+        file: 01-assert.yaml
+    catch:
+    - podLogs:
+        selector: app=my-nodejs
diff --git a/tests/e2e-instrumentation/instrumentation-python-multicontainer/01-assert.yaml b/tests/e2e-instrumentation/instrumentation-python-multicontainer/01-assert.yaml
index d0e8c12567..72f6e7e712 100644
--- a/tests/e2e-instrumentation/instrumentation-python-multicontainer/01-assert.yaml
+++ b/tests/e2e-instrumentation/instrumentation-python-multicontainer/01-assert.yaml
@@ -26,6 +26,8 @@ spec:
         value: otlp
       - name: OTEL_METRICS_EXPORTER
         value: otlp
+      - name: OTEL_LOGS_EXPORTER
+        value: otlp
       - name: OTEL_EXPORTER_OTLP_ENDPOINT
         value: http://localhost:4317
       - name: OTEL_EXPORTER_OTLP_TIMEOUT
@@ -74,6 +76,8 @@ spec:
         value: otlp
       - name: OTEL_METRICS_EXPORTER
         value: otlp
+      - name: OTEL_LOGS_EXPORTER
+        value: otlp
       - name: OTEL_EXPORTER_OTLP_ENDPOINT
         value: http://localhost:4317
       - name: OTEL_EXPORTER_OTLP_TIMEOUT
diff --git a/tests/e2e-instrumentation/instrumentation-python-multicontainer/02-assert.yaml b/tests/e2e-instrumentation/instrumentation-python-multicontainer/02-assert.yaml
index da7c987e54..5e2cbf06e2 100644
--- a/tests/e2e-instrumentation/instrumentation-python-multicontainer/02-assert.yaml
+++ b/tests/e2e-instrumentation/instrumentation-python-multicontainer/02-assert.yaml
@@ -37,6 +37,8 @@ spec:
         value: otlp
       - name: OTEL_METRICS_EXPORTER
         value: otlp
+      - name: OTEL_LOGS_EXPORTER
+        value: otlp
       - name: OTEL_EXPORTER_OTLP_ENDPOINT
         value: http://localhost:4317
       - name: OTEL_EXPORTER_OTLP_TIMEOUT
diff --git a/tests/e2e-instrumentation/instrumentation-python-musl/00-install-collector.yaml b/tests/e2e-instrumentation/instrumentation-python-musl/00-install-collector.yaml
new file mode 100644
index 0000000000..34a26ebb2c
--- /dev/null
+++ b/tests/e2e-instrumentation/instrumentation-python-musl/00-install-collector.yaml
@@ -0,0 +1,22 @@
+apiVersion: opentelemetry.io/v1alpha1
+kind: OpenTelemetryCollector
+metadata:
+  name: sidecar
+spec:
+  config: |
+    receivers:
+      otlp:
+        protocols:
+          grpc:
+          http:
+    processors:
+
+    exporters:
+      debug:
+
+    service:
+      pipelines:
+        traces:
+          receivers: [otlp]
+          exporters: [debug]
+  mode: sidecar
diff --git a/tests/e2e-instrumentation/instrumentation-python-musl/00-install-instrumentation.yaml b/tests/e2e-instrumentation/instrumentation-python-musl/00-install-instrumentation.yaml
new file mode 100644
index 0000000000..987cddaca6
--- /dev/null
+++ b/tests/e2e-instrumentation/instrumentation-python-musl/00-install-instrumentation.yaml
@@ -0,0 +1,30 @@
+apiVersion: opentelemetry.io/v1alpha1
+kind: Instrumentation
+metadata:
+  name: python-musl
+spec:
+  env:
+  - name: OTEL_EXPORTER_OTLP_TIMEOUT
+    value: "20"
+  - name: OTEL_TRACES_SAMPLER
+    value: parentbased_traceidratio
+  - name: OTEL_TRACES_SAMPLER_ARG
+    value: "0.85"
+  - name: SPLUNK_TRACE_RESPONSE_HEADER_ENABLED
+    value: "true"
+  exporter:
+    endpoint: http://localhost:4317
+  propagators:
+  - jaeger
+  - b3
+  sampler:
+    type: parentbased_traceidratio
+    argument: "0.25"
+  python:
+    env:
+    - name: OTEL_LOG_LEVEL
+      value: "debug"
+    - name: OTEL_TRACES_EXPORTER
+      value: otlp
+    - name: OTEL_EXPORTER_OTLP_ENDPOINT
+      value: http://localhost:4318
diff --git a/tests/e2e-instrumentation/instrumentation-python-musl/01-assert.yaml b/tests/e2e-instrumentation/instrumentation-python-musl/01-assert.yaml
new file mode 100644
index 0000000000..2485a7e6d7
--- /dev/null
+++ b/tests/e2e-instrumentation/instrumentation-python-musl/01-assert.yaml
@@ -0,0 +1,85 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  annotations:
+    instrumentation.opentelemetry.io/inject-python: "true"
+    sidecar.opentelemetry.io/inject: "true"
+    instrumentation.opentelemetry.io/otel-python-platform: "musl"
+  labels:
+    app: my-python-musl
+spec:
+  containers:
+  - env:
+    - name: OTEL_NODE_IP
+      valueFrom:
+        fieldRef:
+          fieldPath: status.hostIP
+    - name: OTEL_POD_IP
+      valueFrom:
+        fieldRef:
+          fieldPath: status.podIP
+    - name: OTEL_LOG_LEVEL
+      value: debug
+    - name: OTEL_TRACES_EXPORTER
+      value: otlp
+    - name: OTEL_EXPORTER_OTLP_ENDPOINT
+      value: http://localhost:4318
+    - name: PYTHONPATH
+      value: /otel-auto-instrumentation-python/opentelemetry/instrumentation/auto_instrumentation:/otel-auto-instrumentation-python
+    - name: OTEL_EXPORTER_OTLP_PROTOCOL
+      value: http/protobuf
+    - name: OTEL_METRICS_EXPORTER
+      value: otlp
+    - name: OTEL_LOGS_EXPORTER
+      value: otlp
+    - name: OTEL_EXPORTER_OTLP_TIMEOUT
+      value: "20"
+    - name: OTEL_TRACES_SAMPLER
+      value: parentbased_traceidratio
+    - name: OTEL_TRACES_SAMPLER_ARG
+      value: "0.85"
+    - name: SPLUNK_TRACE_RESPONSE_HEADER_ENABLED
+      value: "true"
+    - name: OTEL_SERVICE_NAME
+      value: my-python-musl
+    - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
+      valueFrom:
+        fieldRef:
+          apiVersion: v1
+          fieldPath: metadata.name
+    - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
+      valueFrom:
+        fieldRef:
+          apiVersion: v1
+          fieldPath: spec.nodeName
+    - name: OTEL_PROPAGATORS
+      value: jaeger,b3
+    - name: OTEL_RESOURCE_ATTRIBUTES
+    name: myapp
+    volumeMounts:
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      readOnly: true
+    - mountPath: /otel-auto-instrumentation-python
+      name: opentelemetry-auto-instrumentation-python
+  - args:
+    - --config=env:OTEL_CONFIG
+    name: otc-container
+  initContainers:
+  - name: opentelemetry-auto-instrumentation-python
+    command:
+    - cp
+    - -r
+    - /autoinstrumentation-musl/.
+    - /otel-auto-instrumentation-python
+status:
+  containerStatuses:
+  - name: myapp
+    ready: true
+    started: true
+  - name: otc-container
+    ready: true
+    started: true
+  initContainerStatuses:
+  - name: opentelemetry-auto-instrumentation-python
+    ready: true
+  phase: Running
diff --git a/tests/e2e-instrumentation/instrumentation-python-musl/01-install-app.yaml b/tests/e2e-instrumentation/instrumentation-python-musl/01-install-app.yaml
new file mode 100644
index 0000000000..3dbca9a62f
--- /dev/null
+++ b/tests/e2e-instrumentation/instrumentation-python-musl/01-install-app.yaml
@@ -0,0 +1,29 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: my-python-musl
+spec:
+  selector:
+    matchLabels:
+      app: my-python-musl
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        app: my-python-musl
+      annotations:
+        sidecar.opentelemetry.io/inject: "true"
+        instrumentation.opentelemetry.io/inject-python: "true"
+        instrumentation.opentelemetry.io/otel-python-platform: "musl"
+    spec:
+      securityContext:
+        runAsUser: 1000
+        runAsGroup: 3000
+        fsGroup: 3000
+      containers:
+      - name: myapp
+        image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-python:main
+        securityContext:
+          allowPrivilegeEscalation: false
+          capabilities:
+            drop: ["ALL"]
diff --git a/tests/e2e-instrumentation/instrumentation-python-musl/chainsaw-test.yaml b/tests/e2e-instrumentation/instrumentation-python-musl/chainsaw-test.yaml
new file mode 100755
index 0000000000..89799f2367
--- /dev/null
+++ b/tests/e2e-instrumentation/instrumentation-python-musl/chainsaw-test.yaml
@@ -0,0 +1,40 @@
+# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
+apiVersion: chainsaw.kyverno.io/v1alpha1
+kind: Test
+metadata:
+  creationTimestamp: null
+  name: instrumentation-python-musl
+spec:
+  steps:
+  - name: step-00
+    try:
+    # In OpenShift, when a namespace is created, all necessary SCC annotations are automatically added. However, if a namespace is created using a resource file with only selected SCCs, the other auto-added SCCs are not included. Therefore, the UID-range and supplemental groups SCC annotations must be set after the namespace is created.
+    - command:
+        entrypoint: kubectl
+        args:
+        - annotate
+        - namespace
+        - ${NAMESPACE}
+        - openshift.io/sa.scc.uid-range=1000/1000
+        - --overwrite
+    - command:
+        entrypoint: kubectl
+        args:
+        - annotate
+        - namespace
+        - ${NAMESPACE}
+        - openshift.io/sa.scc.supplemental-groups=3000/3000
+        - --overwrite
+    - apply:
+        file: 00-install-collector.yaml
+    - apply:
+        file: 00-install-instrumentation.yaml
+  - name: step-01
+    try:
+    - apply:
+        file: 01-install-app.yaml
+    - assert:
+        file: 01-assert.yaml
+    catch:
+    - podLogs:
+        selector: app=my-python-musl
diff --git a/tests/e2e-instrumentation/instrumentation-python/01-assert.yaml b/tests/e2e-instrumentation/instrumentation-python/01-assert.yaml
index 62cf682ba0..94ff9058d7 100644
--- a/tests/e2e-instrumentation/instrumentation-python/01-assert.yaml
+++ b/tests/e2e-instrumentation/instrumentation-python/01-assert.yaml
@@ -29,6 +29,8 @@ spec:
         value: http/protobuf
       - name: OTEL_METRICS_EXPORTER
         value: otlp
+      - name: OTEL_LOGS_EXPORTER
+        value: otlp
       - name: OTEL_EXPORTER_OTLP_TIMEOUT
         value: "20"
       - name: OTEL_TRACES_SAMPLER
diff --git a/tests/e2e-multi-instrumentation/instrumentation-multi-multicontainer-go/02-assert.yaml b/tests/e2e-multi-instrumentation/instrumentation-multi-multicontainer-go/02-assert.yaml
index 71d4e05d06..6fd04f5d65 100644
--- a/tests/e2e-multi-instrumentation/instrumentation-multi-multicontainer-go/02-assert.yaml
+++ b/tests/e2e-multi-instrumentation/instrumentation-multi-multicontainer-go/02-assert.yaml
@@ -43,6 +43,8 @@ spec:
         value: otlp
       - name: OTEL_METRICS_EXPORTER
         value: otlp
+      - name: OTEL_LOGS_EXPORTER
+        value: otlp
       - name: OTEL_EXPORTER_OTLP_TIMEOUT
         value: "20"
       - name: OTEL_TRACES_SAMPLER
@@ -90,6 +92,9 @@ spec:
       exporters:
         debug: null
       service:
+        telemetry:
+          metrics:
+            address: 0.0.0.0:8888
         pipelines:
           traces:
             exporters:
diff --git a/tests/e2e-multi-instrumentation/instrumentation-multi-multicontainer/01-assert.yaml b/tests/e2e-multi-instrumentation/instrumentation-multi-multicontainer/01-assert.yaml
index 61eabd4e91..3ba921ada1 100644
--- a/tests/e2e-multi-instrumentation/instrumentation-multi-multicontainer/01-assert.yaml
+++ b/tests/e2e-multi-instrumentation/instrumentation-multi-multicontainer/01-assert.yaml
@@ -185,6 +185,8 @@ spec:
         value: otlp
       - name: OTEL_METRICS_EXPORTER
         value: otlp
+      - name: OTEL_LOGS_EXPORTER
+        value: otlp
       - name: OTEL_TRACES_SAMPLER
         value: parentbased_traceidratio
       - name: OTEL_TRACES_SAMPLER_ARG
@@ -329,6 +331,9 @@ spec:
       exporters:
         debug: null
       service:
+        telemetry:
+          metrics:
+            address: 0.0.0.0:8888
         pipelines:
           traces:
             exporters:
diff --git a/tests/e2e-native-sidecar/00-assert.yaml b/tests/e2e-native-sidecar/00-assert.yaml
new file mode 100644
index 0000000000..823eaf2ad1
--- /dev/null
+++ b/tests/e2e-native-sidecar/00-assert.yaml
@@ -0,0 +1,22 @@
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  annotations:
+    sidecar.opentelemetry.io/inject: "true"
+  name: myapp
+spec:
+  containers:
+  - name: myapp
+  initContainers:
+  - name: otc-container
+    restartPolicy: Always
+status:
+  containerStatuses:
+  - name: myapp
+    ready: true
+    started: true
+  initContainerStatuses:
+  - name: otc-container
+    ready: true
+    started: true
diff --git a/tests/e2e-native-sidecar/00-install.yaml b/tests/e2e-native-sidecar/00-install.yaml
new file mode 100644
index 0000000000..82c54ffdf8
--- /dev/null
+++ b/tests/e2e-native-sidecar/00-install.yaml
@@ -0,0 +1,41 @@
+---
+apiVersion: opentelemetry.io/v1beta1
+kind: OpenTelemetryCollector
+metadata:
+  name: a-sidecar
+spec:
+  mode: sidecar
+  resources:
+    limits:
+      cpu: 500m
+      memory: 128Mi
+    requests:
+      cpu: 5m
+      memory: 64Mi
+
+  config:
+    receivers:
+      otlp:
+        protocols:
+          http: {}
+    exporters:
+      debug: {}
+    service:
+      pipelines:
+        metrics:
+          receivers: [otlp]
+          exporters: [debug]
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: myapp
+  annotations:
+    sidecar.opentelemetry.io/inject: "true"
+spec:
+  containers:
+  - name: myapp
+    image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-python:main
+    ports:
+    - containerPort: 8080
+      protocol: TCP
diff --git a/tests/e2e-native-sidecar/chainsaw-test.yaml b/tests/e2e-native-sidecar/chainsaw-test.yaml
new file mode 100755
index 0000000000..0d93db6d15
--- /dev/null
+++ b/tests/e2e-native-sidecar/chainsaw-test.yaml
@@ -0,0 +1,14 @@
+# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
+apiVersion: chainsaw.kyverno.io/v1alpha1
+kind: Test
+metadata:
+  creationTimestamp: null
+  name: native-sidecar
+spec:
+  steps:
+  - name: step-00
+    try:
+    - apply:
+        file: 00-install.yaml
+    - assert:
+        file: 00-assert.yaml
diff --git a/tests/e2e-openshift/export-to-cluster-logging-lokistack/chainsaw-test.yaml b/tests/e2e-openshift/export-to-cluster-logging-lokistack/chainsaw-test.yaml
index 15b018f8e2..2678477a1f 100644
--- a/tests/e2e-openshift/export-to-cluster-logging-lokistack/chainsaw-test.yaml
+++ b/tests/e2e-openshift/export-to-cluster-logging-lokistack/chainsaw-test.yaml
@@ -20,6 +20,16 @@ spec:
         file: install-minio-assert.yaml
   - name: Create the LokiStack instance
     try:
+    - command:
+        entrypoint: oc
+        args:
+        - get
+        - storageclass
+        - -o
+        - jsonpath={.items[0].metadata.name}
+        outputs:
+        - name: STORAGE_CLASS_NAME
+          value: ($stdout)
     - apply:
         file: install-loki.yaml
     - assert:
diff --git a/tests/e2e-openshift/export-to-cluster-logging-lokistack/install-loki.yaml b/tests/e2e-openshift/export-to-cluster-logging-lokistack/install-loki.yaml
index 30c5b560e3..e63aa73982 100644
--- a/tests/e2e-openshift/export-to-cluster-logging-lokistack/install-loki.yaml
+++ b/tests/e2e-openshift/export-to-cluster-logging-lokistack/install-loki.yaml
@@ -12,6 +12,6 @@ spec:
     secret:
       name: logging-loki-s3
       type: s3
-  storageClassName: gp2-csi
+  storageClassName: ($STORAGE_CLASS_NAME)
   tenants:
     mode: openshift-logging
diff --git a/tests/e2e-openshift/monitoring/03-assert.yaml b/tests/e2e-openshift/monitoring/03-assert.yaml
index 508687915c..813b944fac 100644
--- a/tests/e2e-openshift/monitoring/03-assert.yaml
+++ b/tests/e2e-openshift/monitoring/03-assert.yaml
@@ -11,6 +11,7 @@ rules:
   - get
   - list
  - watch
+  - create
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
diff --git a/tests/e2e-openshift/monitoring/03-create-monitoring-roles.yaml b/tests/e2e-openshift/monitoring/03-create-monitoring-roles.yaml
index 23fd47841f..dd239ec224 100644
--- a/tests/e2e-openshift/monitoring/03-create-monitoring-roles.yaml
+++ b/tests/e2e-openshift/monitoring/03-create-monitoring-roles.yaml
@@ -6,7 +6,7 @@ metadata:
 rules:
 - apiGroups: ["monitoring.coreos.com"]
   resources: ["prometheuses/api"]
-  verbs: ["get", "list", "watch"]
+  verbs: ["get", "list", "watch", "create"]
 ---
 apiVersion: rbac.authorization.k8s.io/v1
diff --git a/tests/e2e-openshift/monitoring/chainsaw-test.yaml b/tests/e2e-openshift/monitoring/chainsaw-test.yaml
index 0cf36e93f0..4752e8ccb3 100755
--- a/tests/e2e-openshift/monitoring/chainsaw-test.yaml
+++ b/tests/e2e-openshift/monitoring/chainsaw-test.yaml
@@ -1,4 +1,3 @@
-# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
 apiVersion: chainsaw.kyverno.io/v1alpha1
 kind: Test
 metadata:
@@ -14,6 +13,31 @@ spec:
         file: 00-workload-monitoring.yaml
     - assert:
         file: 00-assert.yaml
+  - name: Enable OpenShift platform monitoring on the OpenTelemetry operator namespace
+    try:
+    - command:
+        entrypoint: oc
+        args:
+        - get
+        - pods
+        - -A
+        - -l control-plane=controller-manager
+        - -l app.kubernetes.io/name=opentelemetry-operator
+        - -o
+        - jsonpath={.items[0].metadata.namespace}
+        outputs:
+        - name: OTEL_NAMESPACE
+          value: ($stdout)
+    - command:
+        env:
+        - name: otelnamespace
+          value: ($OTEL_NAMESPACE)
+        entrypoint: oc
+        args:
+        - label
+        - namespace
+        - $otelnamespace
+        - openshift.io/cluster-monitoring=true
   - name: step-01
     try:
     - apply:
diff --git a/tests/e2e-openshift/monitoring/check_metrics.sh b/tests/e2e-openshift/monitoring/check_metrics.sh
index e92a1649e4..ad8843ae38 100755
--- a/tests/e2e-openshift/monitoring/check_metrics.sh
+++ b/tests/e2e-openshift/monitoring/check_metrics.sh
@@ -3,23 +3,23 @@
 TOKEN=$(oc create token prometheus-user-workload -n openshift-user-workload-monitoring)
 THANOS_QUERIER_HOST=$(oc get route thanos-querier -n openshift-monitoring -o json | jq -r '.spec.host')
 
-#Check metrics for OpenTelemetry collector instance.
-metrics="otelcol_process_uptime otelcol_process_runtime_total_sys_memory_bytes otelcol_process_memory_rss otelcol_exporter_sent_spans otelcol_process_cpu_seconds otelcol_process_memory_rss otelcol_process_runtime_heap_alloc_bytes otelcol_process_runtime_total_alloc_bytes otelcol_process_runtime_total_sys_memory_bytes otelcol_process_uptime otelcol_receiver_accepted_spans otelcol_receiver_refused_spans"
+# Check metrics for OpenTelemetry collector instance.
+metrics="otelcol_process_uptime otelcol_process_runtime_total_sys_memory_bytes otelcol_process_memory_rss otelcol_exporter_sent_spans otelcol_process_cpu_seconds otelcol_process_memory_rss otelcol_process_runtime_heap_alloc_bytes otelcol_process_runtime_total_alloc_bytes otelcol_process_runtime_total_sys_memory_bytes otelcol_process_uptime otelcol_receiver_accepted_spans otelcol_receiver_refused_spans controller_runtime_reconcile_time_seconds_count{controller=\"opentelemetrycollector\"} controller_runtime_reconcile_total{controller=\"opentelemetrycollector\",result=\"success\"} workqueue_work_duration_seconds_count{controller=\"opentelemetrycollector\",name=\"opentelemetrycollector\"}"
 
 for metric in $metrics; do
-query="$metric"
-count=0
+  query="$metric"
+  count=0
 
-# Keep fetching and checking the metrics until metrics with value is present.
-while [[ $count -eq 0 ]]; do
-  response=$(curl -k -H "Authorization: Bearer $TOKEN" -H "Content-type: application/json" "https://$THANOS_QUERIER_HOST/api/v1/query?query=$query")
-  count=$(echo "$response" | jq -r '.data.result | length')
+  # Keep fetching and checking the metrics until metrics with value is present.
+  while [[ $count -eq 0 ]]; do
+    response=$(curl -k -H "Authorization: Bearer $TOKEN" --data-urlencode "query=$query" "https://$THANOS_QUERIER_HOST/api/v1/query")
+    count=$(echo "$response" | jq -r '.data.result | length' | tr -d '\n' | tr -d ' ')
 
-  if [[ $count -eq 0 ]]; then
-    echo "No metric '$metric' with value present. Retrying..."
-    sleep 5 # Wait for 5 seconds before retrying
+    if [[ "$count" -eq 0 ]]; then
+      echo "No metric '$metric' with value present. Retrying..."
+      sleep 5 # Wait for 5 seconds before retrying
   else
-    echo "Metric '$metric' with value is present."
+      echo "Metric '$metric' with value is present."
   fi
 done
 done
diff --git a/tests/e2e-openshift/multi-cluster/04-assert.yaml b/tests/e2e-openshift/multi-cluster/04-assert.yaml
index 922508c134..f1a66083cb 100644
--- a/tests/e2e-openshift/multi-cluster/04-assert.yaml
+++ b/tests/e2e-openshift/multi-cluster/04-assert.yaml
@@ -4,9 +4,7 @@ metadata:
   name: generate-traces-http
   namespace: chainsaw-multi-cluster-send
 status:
-  conditions:
-  - status: "True"
-    type: Complete
+  succeeded: 1
 
 ---
 apiVersion: batch/v1
@@ -15,6 +13,4 @@ metadata:
   name: generate-traces-grpc
   namespace: chainsaw-multi-cluster-send
 status:
-  conditions:
-  - status: "True"
-    type: Complete
+  succeeded: 1
\ No newline at end of file
diff --git a/tests/e2e-openshift/must-gather/assert-install-app.yaml b/tests/e2e-openshift/must-gather/assert-install-app.yaml
new file mode 100644
index 0000000000..719727ca68
--- /dev/null
+++ b/tests/e2e-openshift/must-gather/assert-install-app.yaml
@@ -0,0 +1,77 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  annotations:
+    instrumentation.opentelemetry.io/inject-nodejs: "true"
+    sidecar.opentelemetry.io/inject: "true"
+  labels:
+    app: my-nodejs
+spec:
+  containers:
+  - env:
+    - name: OTEL_NODE_IP
+      valueFrom:
+        fieldRef:
+          fieldPath: status.hostIP
+    - name: OTEL_POD_IP
+      valueFrom:
+        fieldRef:
+          fieldPath: status.podIP
+    - name: NODE_PATH
+      value: /usr/local/lib/node_modules
+    - name: OTEL_NODEJS_DEBUG
+      value: "true"
+    - name: NODE_OPTIONS
+      value: ' --require /otel-auto-instrumentation-nodejs/autoinstrumentation.js'
+    - name: OTEL_TRACES_EXPORTER
+      value: otlp
+    - name: OTEL_EXPORTER_OTLP_ENDPOINT
+      value: http://localhost:4317
+    - name: OTEL_EXPORTER_OTLP_TIMEOUT
+      value: "20"
+    - name: OTEL_TRACES_SAMPLER
+      value: parentbased_traceidratio
+    - name: OTEL_TRACES_SAMPLER_ARG
+      value: "0.85"
+    - name: SPLUNK_TRACE_RESPONSE_HEADER_ENABLED
+      value: "true"
+    - name: OTEL_METRICS_EXPORTER
+      value: prometheus
+    - name: OTEL_SERVICE_NAME
+      value: my-nodejs
+    - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
+      valueFrom:
+        fieldRef:
+          apiVersion: v1
+          fieldPath: metadata.name
+    - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
+      valueFrom:
+        fieldRef:
+          apiVersion: v1
+          fieldPath: spec.nodeName
+    - name: OTEL_PROPAGATORS
+      value: jaeger,b3
+    - name: OTEL_RESOURCE_ATTRIBUTES
+    name: myapp
+    volumeMounts:
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      readOnly: true
+    - mountPath: /otel-auto-instrumentation-nodejs
+      name: opentelemetry-auto-instrumentation-nodejs
+  - args:
+    - --config=env:OTEL_CONFIG
+    name: otc-container
+  initContainers:
+  - name: opentelemetry-auto-instrumentation-nodejs
+status:
+  containerStatuses:
+  - name: myapp
+    ready: true
+    started: true
+  - name: otc-container
+    ready: true
+    started: true
+  initContainerStatuses:
+  - name: opentelemetry-auto-instrumentation-nodejs
+    ready: true
+  phase: Running
diff --git a/tests/e2e-openshift/must-gather/assert-install-target-allocator.yaml b/tests/e2e-openshift/must-gather/assert-install-target-allocator.yaml
new file mode 100644
index 0000000000..b70d638b11
--- /dev/null
+++ b/tests/e2e-openshift/must-gather/assert-install-target-allocator.yaml
@@ -0,0 +1,93 @@
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: stateful-collector
+status:
+  readyReplicas: 1
+  replicas: 1
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: stateful-targetallocator
+status:
+  observedGeneration: 1
+  readyReplicas: 1
+  replicas: 1
+---
+apiVersion: v1
+data:
+  collector.yaml: |
+    exporters:
+      debug: {}
+    processors: {}
+    receivers:
+      jaeger:
+        protocols:
+          grpc:
+            endpoint: 0.0.0.0:14250
+      prometheus:
+        config:
+          global:
+            scrape_interval: 30s
+            scrape_protocols:
+            - PrometheusProto
+            - OpenMetricsText1.0.0
+            - OpenMetricsText0.0.1
+            - PrometheusText0.0.4
+        target_allocator:
+          collector_id: ${POD_NAME}
+          endpoint: http://stateful-targetallocator:80
+          interval: 30s
+    service:
+      pipelines:
+        traces:
+          exporters:
+          - debug
+          receivers:
+          - jaeger
+      telemetry:
+        metrics:
+          address: 0.0.0.0:8888
+kind: ConfigMap
+metadata:
+  labels:
+    app.kubernetes.io/component: opentelemetry-collector
+    app.kubernetes.io/instance: chainsaw-must-gather.stateful
+    app.kubernetes.io/managed-by: opentelemetry-operator
+    app.kubernetes.io/name: stateful-collector
+    app.kubernetes.io/part-of: opentelemetry
+  name: stateful-collector-2729987d
+  namespace: chainsaw-must-gather
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: stateful-targetallocator
+  namespace: ($namespace)
+data:
+  targetallocator.yaml:
+    (parse_yaml(@)):
+      allocation_strategy: consistent-hashing
+      collector_selector:
+        matchlabels:
+          app.kubernetes.io/component: opentelemetry-collector
+          app.kubernetes.io/instance: (join('.', [$namespace, 'stateful']))
+          app.kubernetes.io/managed-by: opentelemetry-operator
+          app.kubernetes.io/part-of: opentelemetry
+        matchexpressions: [ ]
+      config:
+        global:
+          scrape_interval: 30s
+          scrape_protocols:
+          - PrometheusProto
+          - OpenMetricsText1.0.0
+          - OpenMetricsText0.0.1
+          - PrometheusText0.0.4
+        scrape_configs:
+        - job_name: otel-collector
+          scrape_interval: 10s
+          static_configs:
+          - targets:
+            - 0.0.0.0:8888
+      filter_strategy: relabel-config
diff --git a/tests/e2e-openshift/must-gather/chainsaw-test.yaml b/tests/e2e-openshift/must-gather/chainsaw-test.yaml
new file mode 100755
index 0000000000..fa7dbc2e41
--- /dev/null
+++ b/tests/e2e-openshift/must-gather/chainsaw-test.yaml
@@ -0,0 +1,70 @@
+apiVersion: chainsaw.kyverno.io/v1alpha1
+kind: Test
+metadata:
+  name: must-gather
+spec:
+  namespace: chainsaw-must-gather
+  steps:
+  - name: Install Target Allocator
+    try:
+    - apply:
+        template: true
+        file: install-target-allocator.yaml
+    - assert:
+        file: assert-install-target-allocator.yaml
+    catch:
+    - podLogs:
+        selector: app.kubernetes.io/component=opentelemetry-targetallocator
+  - name: Create instrumentation CR and sidecar collector instance
+    try:
+    # In OpenShift, when a namespace is created, all necessary SCC annotations are automatically added. However, if a namespace is created using a resource file with only selected SCCs, the other auto-added SCCs are not included. Therefore, the UID-range and supplemental groups SCC annotations must be set after the namespace is created.
+    - command:
+        entrypoint: kubectl
+        args:
+        - annotate
+        - namespace
+        - ${NAMESPACE}
+        - openshift.io/sa.scc.uid-range=1000/1000
+        - --overwrite
+    - command:
+        entrypoint: kubectl
+        args:
+        - annotate
+        - namespace
+        - ${NAMESPACE}
+        - openshift.io/sa.scc.supplemental-groups=3000/3000
+        - --overwrite
+    - apply:
+        file: install-collector-sidecar.yaml
+    - apply:
+        file: install-instrumentation.yaml
+  - name: Install app
+    try:
+    - apply:
+        file: install-app.yaml
+    - assert:
+        file: assert-install-app.yaml
+    catch:
+    - podLogs:
+        selector: app=my-nodejs
+  - name: Run the must-gather and verify the contents
+    try:
+    - command:
+        entrypoint: oc
+        args:
+        - get
+        - pods
+        - -A
+        - -l control-plane=controller-manager
+        - -l app.kubernetes.io/name=opentelemetry-operator
+        - -o
+        - jsonpath={.items[0].metadata.namespace}
+        outputs:
+        - name: OTEL_NAMESPACE
+          value: ($stdout)
+    - script:
+        env:
+        - name: otelnamespace
+          value: ($OTEL_NAMESPACE)
+        timeout: 5m
+        content: ./check_must_gather.sh
diff --git a/tests/e2e-openshift/must-gather/check_must_gather.sh b/tests/e2e-openshift/must-gather/check_must_gather.sh
new file mode 100755
index 0000000000..5722a06e47
--- /dev/null
+++ b/tests/e2e-openshift/must-gather/check_must_gather.sh
@@ -0,0 +1,51 @@
+#!/bin/bash
+
+# Create a temporary directory to store must-gather
+MUST_GATHER_DIR=$(mktemp -d)
+
+# Run the must-gather script
+oc adm must-gather --dest-dir=$MUST_GATHER_DIR --image=ghcr.io/open-telemetry/opentelemetry-operator/must-gather:latest -- /usr/bin/must-gather --operator-namespace $otelnamespace
+
+# Define required files and directories
+REQUIRED_ITEMS=(
+  event-filter.html
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/olm/*opentelemetry-operato*.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/olm/clusterserviceversion-opentelemetry-operator-v*.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/olm/installplan-install-*.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/olm/subscription-opentelemetry-operator-v*-sub.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-must-gather/stateful/service-stateful-collector-headless.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-must-gather/stateful/service-stateful-collector.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-must-gather/stateful/deployment-stateful-targetallocator.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-must-gather/stateful/service-stateful-collector-monitoring.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-must-gather/stateful/poddisruptionbudget-stateful-targetallocator.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-must-gather/stateful/poddisruptionbudget-stateful-collector.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-must-gather/stateful/service-stateful-targetallocator.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-must-gather/stateful/configmap-stateful-collector-2729987d.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-must-gather/stateful/configmap-stateful-targetallocator.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-must-gather/stateful/statefulset-stateful-collector.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-must-gather/stateful/opentelemetrycollector-stateful.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-must-gather/stateful/serviceaccount-stateful-collector.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-must-gather/sidecar/service-sidecar-collector.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-must-gather/sidecar/opentelemetrycollector-sidecar.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-must-gather/sidecar/service-sidecar-collector-monitoring.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-must-gather/sidecar/configmap-sidecar-collector-3826c0e7.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-must-gather/sidecar/serviceaccount-sidecar-collector.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-must-gather/sidecar/service-sidecar-collector-headless.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/chainsaw-must-gather/instrumentation-nodejs.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/opentelemetry-operator-controller-manager-*
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/deployment-opentelemetry-operator-controller-manager.yaml
+  timestamp
+)
+
+# Verify each required item
+for item in "${REQUIRED_ITEMS[@]}"; do
+  if ! find "$MUST_GATHER_DIR" -path "$MUST_GATHER_DIR/$item" -print -quit | grep -q .; then
+    echo "Missing: $item"
+    exit 1
+  else
+    echo "Found: $item"
+  fi
+done
+
+# Cleanup the must-gather directory
+rm -rf $MUST_GATHER_DIR
diff --git a/tests/e2e-openshift/must-gather/install-app.yaml b/tests/e2e-openshift/must-gather/install-app.yaml
new file mode 100644
index 0000000000..45de0d5f01
--- /dev/null
+++ b/tests/e2e-openshift/must-gather/install-app.yaml
@@ -0,0 +1,31 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: my-nodejs
+spec:
+  selector:
+    matchLabels:
+      app: my-nodejs
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        app: my-nodejs
+      annotations:
+        sidecar.opentelemetry.io/inject: "true"
+        instrumentation.opentelemetry.io/inject-nodejs: "true"
+    spec:
+      securityContext:
+        runAsUser: 1000
+        runAsGroup: 3000
+        fsGroup: 3000
+      containers:
+      - name: myapp
+        image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-nodejs:main
+        securityContext:
+          allowPrivilegeEscalation: false
+          capabilities:
+            drop: ["ALL"]
+        env:
+        - name: NODE_PATH
+          value: /usr/local/lib/node_modules
diff --git a/tests/e2e-openshift/must-gather/install-collector-sidecar.yaml b/tests/e2e-openshift/must-gather/install-collector-sidecar.yaml
new file mode 100644
index 0000000000..04dabc7008
--- /dev/null
+++ b/tests/e2e-openshift/must-gather/install-collector-sidecar.yaml
@@ -0,0 +1,22 @@
+apiVersion: opentelemetry.io/v1beta1
+kind: OpenTelemetryCollector
+metadata:
+  name: sidecar
+spec:
+  config:
+    receivers:
+      otlp:
+        protocols:
+          grpc: {}
+          http: {}
+    processors: {}
+
+    exporters:
+      debug:
+
+    service:
+      pipelines:
+        traces:
+          receivers: [otlp]
+          exporters: [debug]
+  mode: sidecar
diff --git a/tests/e2e-openshift/must-gather/install-instrumentation.yaml b/tests/e2e-openshift/must-gather/install-instrumentation.yaml
new file mode 100644
index 0000000000..a87939b6c2
--- /dev/null
+++ b/tests/e2e-openshift/must-gather/install-instrumentation.yaml
@@ -0,0 +1,33 @@
+apiVersion:
opentelemetry.io/v1alpha1
+kind: Instrumentation
+metadata:
+  name: nodejs
+spec:
+  env:
+    - name: OTEL_TRACES_EXPORTER
+      value: otlp
+    - name: OTEL_EXPORTER_OTLP_ENDPOINT
+      value: http://localhost:4317
+    - name: OTEL_EXPORTER_OTLP_TIMEOUT
+      value: "20"
+    - name: OTEL_TRACES_SAMPLER
+      value: parentbased_traceidratio
+    - name: OTEL_TRACES_SAMPLER_ARG
+      value: "0.85"
+    - name: SPLUNK_TRACE_RESPONSE_HEADER_ENABLED
+      value: "true"
+    - name: OTEL_METRICS_EXPORTER
+      value: prometheus
+  exporter:
+    endpoint: http://localhost:4317
+  propagators:
+    - jaeger
+    - b3
+  sampler:
+    type: parentbased_traceidratio
+    argument: "0.25"
+  nodejs:
+    env:
+      - name: OTEL_NODEJS_DEBUG
+        value: "true"
+
diff --git a/tests/e2e-openshift/must-gather/install-target-allocator.yaml b/tests/e2e-openshift/must-gather/install-target-allocator.yaml
new file mode 100644
index 0000000000..0fd12d0958
--- /dev/null
+++ b/tests/e2e-openshift/must-gather/install-target-allocator.yaml
@@ -0,0 +1,70 @@
+apiVersion: v1
+automountServiceAccountToken: true
+kind: ServiceAccount
+metadata:
+  name: ta
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: smoke-targetallocator
+rules:
+- apiGroups:
+  - ""
+  resources:
+  - pods
+  - namespaces
+  verbs:
+  - get
+  - list
+  - watch
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: (join('-', ['default-view', $namespace]))
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: smoke-targetallocator
+subjects:
+- kind: ServiceAccount
+  name: ta
+  namespace: ($namespace)
+---
+apiVersion: opentelemetry.io/v1beta1
+kind: OpenTelemetryCollector
+metadata:
+  name: stateful
+spec:
+  config:
+    receivers:
+      jaeger:
+        protocols:
+          grpc: {}
+
+      # Collect own metrics
+      prometheus:
+        config:
+          global:
+            scrape_interval: 30s
+            scrape_protocols: ['PrometheusProto','OpenMetricsText1.0.0','OpenMetricsText0.0.1','PrometheusText0.0.4']
+          scrape_configs:
+          - job_name: 'otel-collector'
+
scrape_interval: 10s + static_configs: + - targets: [ '0.0.0.0:8888' ] + + processors: {} + + exporters: + debug: {} + service: + pipelines: + traces: + receivers: [jaeger] + exporters: [debug] + mode: statefulset + targetAllocator: + enabled: true + serviceAccount: ta diff --git a/tests/e2e-openshift/otlp-metrics-traces/00-install-jaeger.yaml b/tests/e2e-openshift/otlp-metrics-traces/00-install-jaeger.yaml index 7ad775f831..9a2cae1b6b 100644 --- a/tests/e2e-openshift/otlp-metrics-traces/00-install-jaeger.yaml +++ b/tests/e2e-openshift/otlp-metrics-traces/00-install-jaeger.yaml @@ -1,11 +1,4 @@ #For this test case you'll need to install the Jaeger operator (OpenShift Distributed Tracing Platform in OpenShift) - -apiVersion: v1 -kind: Namespace -metadata: - name: chainsaw-otlp-metrics - ---- apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: diff --git a/tests/e2e-openshift/otlp-metrics-traces/02-otel-metrics-collector.yaml b/tests/e2e-openshift/otlp-metrics-traces/02-otel-metrics-collector.yaml index 0ff520ecc2..5a7d056bc8 100644 --- a/tests/e2e-openshift/otlp-metrics-traces/02-otel-metrics-collector.yaml +++ b/tests/e2e-openshift/otlp-metrics-traces/02-otel-metrics-collector.yaml @@ -1,6 +1,6 @@ #https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/prometheusexporter/README.md -apiVersion: opentelemetry.io/v1alpha1 +apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: cluster-collector @@ -10,13 +10,13 @@ spec: observability: metrics: enableMetrics: true - config: | + config: receivers: otlp: protocols: - grpc: - http: - processors: + grpc: {} + http: {} + processors: {} exporters: otlp: endpoint: jaeger-allinone-collector-headless.chainsaw-otlp-metrics.svc:4317 diff --git a/tests/e2e-openshift/otlp-metrics-traces/chainsaw-test.yaml b/tests/e2e-openshift/otlp-metrics-traces/chainsaw-test.yaml index 50ed4185e7..de34a5a232 100755 --- a/tests/e2e-openshift/otlp-metrics-traces/chainsaw-test.yaml +++ 
b/tests/e2e-openshift/otlp-metrics-traces/chainsaw-test.yaml @@ -7,6 +7,7 @@ metadata: spec: # Avoid running this test case in parallel to prevent the deletion of shared resources used by multiple tests, specifically in the context of OpenShift user workload monitoring. concurrent: false + namespace: chainsaw-otlp-metrics steps: - name: step-00 try: @@ -42,3 +43,24 @@ spec: - script: timeout: 5m content: ./check_metrics.sh + - name: Run the must-gather and verify the contents + try: + - command: + entrypoint: oc + args: + - get + - pods + - -A + - -l control-plane=controller-manager + - -l app.kubernetes.io/name=opentelemetry-operator + - -o + - jsonpath={.items[0].metadata.namespace} + outputs: + - name: OTEL_NAMESPACE + value: ($stdout) + - script: + env: + - name: otelnamespace + value: ($OTEL_NAMESPACE) + timeout: 5m + content: ./check_must_gather.sh diff --git a/tests/e2e-openshift/otlp-metrics-traces/check_must_gather.sh b/tests/e2e-openshift/otlp-metrics-traces/check_must_gather.sh new file mode 100755 index 0000000000..855506b7cc --- /dev/null +++ b/tests/e2e-openshift/otlp-metrics-traces/check_must_gather.sh @@ -0,0 +1,41 @@ +#!/bin/bash + +# Create the directory to store must-gather +MUST_GATHER_DIR=/tmp/otlp-metrics-traces +mkdir -p $MUST_GATHER_DIR + +# Run the must-gather script +oc adm must-gather --dest-dir=$MUST_GATHER_DIR --image=ghcr.io/open-telemetry/opentelemetry-operator/must-gather:latest -- /usr/bin/must-gather --operator-namespace $otelnamespace + +# Define required files and directories +REQUIRED_ITEMS=( + event-filter.html + ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/olm/clusterserviceversion-opentelemetry-operator-*.yaml + ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/olm/*opentelemetry-operator*.yaml + ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/olm/installplan-install-*.yaml + 
ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/olm/subscription-opentelemetry-operator-v*-sub.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-otlp-metrics/cluster-collector/service-cluster-collector-collector-headless.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-otlp-metrics/cluster-collector/deployment-cluster-collector-collector.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-otlp-metrics/cluster-collector/service-cluster-collector-collector-monitoring.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-otlp-metrics/cluster-collector/opentelemetrycollector-cluster-collector.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-otlp-metrics/cluster-collector/configmap-cluster-collector-collector-57b76c99.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-otlp-metrics/cluster-collector/serviceaccount-cluster-collector-collector.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-otlp-metrics/cluster-collector/service-cluster-collector-collector.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/namespaces/chainsaw-otlp-metrics/cluster-collector/poddisruptionbudget-cluster-collector-collector.yaml
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/opentelemetry-operator-controller-manager-*
+  ghcr-io-open-telemetry-opentelemetry-operator-must-gather-sha256-*/deployment-opentelemetry-operator-controller-manager.yaml
+  timestamp
+)
+
+# Verify each required item
+for item in "${REQUIRED_ITEMS[@]}"; do
+  if ! find "$MUST_GATHER_DIR" -path "$MUST_GATHER_DIR/$item" -print -quit | grep -q .; then
+    echo "Missing: $item"
+    exit 1
+  else
+    echo "Found: $item"
+  fi
+done
+
+# Cleanup the must-gather directory
+rm -rf $MUST_GATHER_DIR
diff --git a/tests/e2e-ta-collector-mtls/certmanager-permissions/certmanager.yaml b/tests/e2e-ta-collector-mtls/certmanager-permissions/certmanager.yaml
new file mode 100644
index 0000000000..1ef192378a
--- /dev/null
+++ b/tests/e2e-ta-collector-mtls/certmanager-permissions/certmanager.yaml
@@ -0,0 +1,17 @@
+- op: add
+  path: /rules/-
+  value:
+    apiGroups:
+    - cert-manager.io
+    resources:
+    - issuers
+    - certificaterequests
+    - certificates
+    verbs:
+    - create
+    - get
+    - list
+    - watch
+    - update
+    - patch
+    - delete
\ No newline at end of file
diff --git a/tests/e2e-ta-collector-mtls/ta-collector-mtls/00-assert.yaml b/tests/e2e-ta-collector-mtls/ta-collector-mtls/00-assert.yaml
new file mode 100644
index 0000000000..db11c838bd
--- /dev/null
+++ b/tests/e2e-ta-collector-mtls/ta-collector-mtls/00-assert.yaml
@@ -0,0 +1,89 @@
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: prometheus-cr-collector
+status:
+  readyReplicas: 1
+  replicas: 1
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: prometheus-cr-targetallocator
+status:
+  observedGeneration: 1
+  readyReplicas: 1
+  replicas: 1
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: prometheus-cr-targetallocator
+---
+apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+  name: prometheus-cr-ca-cert
+---
+apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+  name: prometheus-cr-ta-server-cert
+---
+apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+  name: prometheus-cr-ta-client-cert
+---
+apiVersion: v1
+data:
+  collector.yaml: |
+    exporters:
+      prometheus:
+        endpoint: 0.0.0.0:9090
+    receivers:
+      prometheus:
+        config: {}
+        target_allocator:
+          collector_id: ${POD_NAME}
+          endpoint: https://prometheus-cr-targetallocator:443
+          interval: 30s
+
tls: + ca_file: /tls/ca.crt + cert_file: /tls/tls.crt + key_file: /tls/tls.key + service: + pipelines: + metrics: + exporters: + - prometheus + receivers: + - prometheus + telemetry: + metrics: + address: 0.0.0.0:8888 +kind: ConfigMap +metadata: + name: prometheus-cr-collector-19c94a81 +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + app.kubernetes.io/component: opentelemetry-targetallocator + app.kubernetes.io/managed-by: opentelemetry-operator +spec: + containers: + - name: ta-container + ports: + - containerPort: 8080 + name: http + - containerPort: 8443 + name: https +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: check-ta-serving-over-https +status: + succeeded: 1 diff --git a/tests/e2e-ta-collector-mtls/ta-collector-mtls/00-install.yaml b/tests/e2e-ta-collector-mtls/ta-collector-mtls/00-install.yaml new file mode 100644 index 0000000000..5d0359b079 --- /dev/null +++ b/tests/e2e-ta-collector-mtls/ta-collector-mtls/00-install.yaml @@ -0,0 +1,187 @@ +apiVersion: v1 +automountServiceAccountToken: true +kind: ServiceAccount +metadata: + name: ta +--- +apiVersion: v1 +automountServiceAccountToken: true +kind: ServiceAccount +metadata: + name: collector +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: (join('-', ['ta', $namespace])) +rules: +- apiGroups: + - "" + resources: + - pods + - nodes + - services + - endpoints + - configmaps + - secrets + - namespaces + verbs: + - get + - watch + - list +- apiGroups: + - apps + resources: + - statefulsets + - services + - endpoints + verbs: + - get + - watch + - list +- apiGroups: + - discovery.k8s.io + resources: + - endpointslices + verbs: + - get + - watch + - list +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - get + - watch + - list +- apiGroups: + - monitoring.coreos.com + resources: + - servicemonitors + - podmonitors + verbs: + - get + - watch + - list +- nonResourceURLs: + - /metrics + verbs: + - get +--- +apiVersion: 
rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: (join('-', ['collector', $namespace])) +rules: +- apiGroups: + - "" + resources: + - pods + - nodes + - nodes/metrics + - services + - endpoints + - namespaces + verbs: + - get + - watch + - list +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - get + - watch + - list +- nonResourceURLs: + - /metrics + - /metrics/cadvisor + verbs: + - get +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: (join('-', ['ta', $namespace])) +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: (join('-', ['ta', $namespace])) +subjects: +- kind: ServiceAccount + name: ta + namespace: ($namespace) +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: (join('-', ['collector', $namespace])) +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: (join('-', ['collector', $namespace])) +subjects: +- kind: ServiceAccount + name: collector + namespace: ($namespace) +--- +apiVersion: opentelemetry.io/v1alpha1 +kind: OpenTelemetryCollector +metadata: + name: prometheus-cr +spec: + config: | + receivers: + prometheus: + config: + scrape_configs: [] + + processors: + + exporters: + prometheus: + endpoint: 0.0.0.0:9090 + service: + pipelines: + metrics: + receivers: [prometheus] + exporters: [prometheus] + mode: statefulset + serviceAccount: collector + targetAllocator: + enabled: true + prometheusCR: + enabled: true + scrapeInterval: 1s + serviceAccount: ta +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: check-ta-serving-over-https +spec: + template: + spec: + restartPolicy: OnFailure + containers: + - name: check-ta + image: curlimages/curl + volumeMounts: + - name: tls-secret + mountPath: /etc/tls + readOnly: true + args: + - /bin/sh + - -c + - | + curl -s \ + --cert /etc/tls/tls.crt \ + --key /etc/tls/tls.key \ + --cacert /etc/tls/ca.crt \ + 
https://prometheus-cr-targetallocator:443 + volumes: + - name: tls-secret + secret: + secretName: prometheus-cr-ta-client-cert diff --git a/tests/e2e-ta-collector-mtls/ta-collector-mtls/01-assert.yaml b/tests/e2e-ta-collector-mtls/ta-collector-mtls/01-assert.yaml new file mode 100644 index 0000000000..e4f67bf8d4 --- /dev/null +++ b/tests/e2e-ta-collector-mtls/ta-collector-mtls/01-assert.yaml @@ -0,0 +1,29 @@ +apiVersion: v1 +kind: Secret +metadata: + name: metrics-app-secret +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: metrics-app + labels: + app: metrics-app +status: + observedGeneration: 1 + readyReplicas: 1 + replicas: 1 +--- +apiVersion: v1 +kind: Service +metadata: + name: metrics-service + labels: + app: metrics-app +--- +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: metrics-servicemonitor + labels: + app: metrics-app \ No newline at end of file diff --git a/tests/e2e-ta-collector-mtls/ta-collector-mtls/01-install.yaml b/tests/e2e-ta-collector-mtls/ta-collector-mtls/01-install.yaml new file mode 100644 index 0000000000..30bc058eae --- /dev/null +++ b/tests/e2e-ta-collector-mtls/ta-collector-mtls/01-install.yaml @@ -0,0 +1,78 @@ +apiVersion: v1 +kind: Secret +metadata: + name: metrics-app-secret +type: Opaque +stringData: + BASIC_AUTH_USERNAME: user + BASIC_AUTH_PASSWORD: t0p$ecreT +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: metrics-app + labels: + app: metrics-app +spec: + replicas: 1 + selector: + matchLabels: + app: metrics-app + template: + metadata: + labels: + app: metrics-app + spec: + containers: + - name: metrics-app + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-metrics-basic-auth:main + ports: + - containerPort: 9123 + env: + - name: BASIC_AUTH_USERNAME + valueFrom: + secretKeyRef: + name: metrics-app-secret + key: BASIC_AUTH_USERNAME + - name: BASIC_AUTH_PASSWORD + valueFrom: + secretKeyRef: + name: metrics-app-secret + key: BASIC_AUTH_PASSWORD +--- 
+apiVersion: v1 +kind: Service +metadata: + name: metrics-service + labels: + app: metrics-app +spec: + ports: + - name: metrics + port: 9123 + targetPort: 9123 + protocol: TCP + selector: + app: metrics-app + type: ClusterIP +--- +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: metrics-servicemonitor + labels: + app: metrics-app +spec: + selector: + matchLabels: + app: metrics-app + endpoints: + - port: metrics + interval: 30s + basicAuth: + username: + name: metrics-app-secret + key: BASIC_AUTH_USERNAME + password: + name: metrics-app-secret + key: BASIC_AUTH_PASSWORD diff --git a/tests/e2e-ta-collector-mtls/ta-collector-mtls/02-assert.yaml b/tests/e2e-ta-collector-mtls/ta-collector-mtls/02-assert.yaml new file mode 100644 index 0000000000..b3b95bf022 --- /dev/null +++ b/tests/e2e-ta-collector-mtls/ta-collector-mtls/02-assert.yaml @@ -0,0 +1,20 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: check-metrics +status: + succeeded: 1 +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: check-ta-jobs +status: + succeeded: 1 +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: check-ta-scrape-configs +status: + succeeded: 1 \ No newline at end of file diff --git a/tests/e2e-ta-collector-mtls/ta-collector-mtls/02-install.yaml b/tests/e2e-ta-collector-mtls/ta-collector-mtls/02-install.yaml new file mode 100644 index 0000000000..5e45f4e150 --- /dev/null +++ b/tests/e2e-ta-collector-mtls/ta-collector-mtls/02-install.yaml @@ -0,0 +1,63 @@ +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: prometheus-cr +spec: + endpoints: + - port: monitoring + selector: + matchLabels: + app.kubernetes.io/managed-by: opentelemetry-operator +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: check-metrics +spec: + template: + spec: + restartPolicy: OnFailure + containers: + - name: check-metrics + image: curlimages/curl + args: + - /bin/sh + - -c + - | + for i in $(seq 30); do + if curl -m 1 -s 
http://prometheus-cr-collector:9090/metrics | grep "Client was authenticated"; then exit 0; fi + sleep 5 + done + exit 1 +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: check-ta-jobs +spec: + template: + spec: + restartPolicy: OnFailure + containers: + - name: check-metrics + image: curlimages/curl + args: + - /bin/sh + - -c + - curl -s http://prometheus-cr-targetallocator/scrape_configs | grep "prometheus-cr" +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: check-ta-scrape-configs +spec: + template: + spec: + restartPolicy: OnFailure + containers: + - name: check-metrics + image: curlimages/curl + args: + - /bin/sh + - -c + - curl -s http://prometheus-cr-targetallocator/jobs | grep "prometheus-cr" diff --git a/tests/e2e-ta-collector-mtls/ta-collector-mtls/chainsaw-test.yaml b/tests/e2e-ta-collector-mtls/ta-collector-mtls/chainsaw-test.yaml new file mode 100755 index 0000000000..6db3baf206 --- /dev/null +++ b/tests/e2e-ta-collector-mtls/ta-collector-mtls/chainsaw-test.yaml @@ -0,0 +1,34 @@ +# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json +apiVersion: chainsaw.kyverno.io/v1alpha1 +kind: Test +metadata: + creationTimestamp: null + name: targetallocator-collector-mtls +spec: + steps: + - name: step-00 + try: + - apply: + template: true + file: 00-install.yaml + - assert: + file: 00-assert.yaml + catch: + - podLogs: + selector: app.kubernetes.io/managed-by=opentelemetry-operator + - name: step-01 + try: + - apply: + file: 01-install.yaml + - assert: + file: 01-assert.yaml + - name: step-02 + try: + - apply: + template: true + file: 02-install.yaml + - assert: + file: 02-assert.yaml + catch: + - podLogs: + selector: app.kubernetes.io/managed-by=opentelemetry-operator \ No newline at end of file diff --git a/tests/e2e-targetallocator-cr/targetallocator-label/00-assert.yaml b/tests/e2e-targetallocator-cr/targetallocator-label/00-assert.yaml new file mode 100644 index 
0000000000..7aa573eda7 --- /dev/null +++ b/tests/e2e-targetallocator-cr/targetallocator-label/00-assert.yaml @@ -0,0 +1,40 @@ +--- +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + app.kubernetes.io/name: ta-collector +data: + collector.yaml: | + receivers: + prometheus: + config: + scrape_configs: + - job_name: otel-collector + scrape_interval: 10s + static_configs: + - targets: + - 0.0.0.0:8888 + exporters: + debug: {} + service: + telemetry: + metrics: + address: 0.0.0.0:8888 + pipelines: + metrics: + exporters: + - debug + receivers: + - prometheus + +--- +apiVersion: v1 +data: + targetallocator.yaml: | + allocation_strategy: consistent-hashing + collector_selector: null + filter_strategy: "" +kind: ConfigMap +metadata: + name: ta-targetallocator \ No newline at end of file diff --git a/tests/e2e-targetallocator-cr/targetallocator-label/00-install.yaml b/tests/e2e-targetallocator-cr/targetallocator-label/00-install.yaml new file mode 100644 index 0000000000..b905f8d88e --- /dev/null +++ b/tests/e2e-targetallocator-cr/targetallocator-label/00-install.yaml @@ -0,0 +1,30 @@ +--- +apiVersion: opentelemetry.io/v1alpha1 +kind: TargetAllocator +metadata: + name: ta +spec: +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: ta +spec: + mode: statefulset + config: + receivers: + prometheus: + config: + scrape_configs: + - job_name: 'otel-collector' + scrape_interval: 10s + static_configs: + - targets: [ '0.0.0.0:8888' ] + exporters: + debug: {} + service: + pipelines: + metrics: + receivers: [prometheus] + exporters: [debug] + diff --git a/tests/e2e-targetallocator-cr/targetallocator-label/01-add-ta-label.yaml b/tests/e2e-targetallocator-cr/targetallocator-label/01-add-ta-label.yaml new file mode 100644 index 0000000000..1e12d1b698 --- /dev/null +++ b/tests/e2e-targetallocator-cr/targetallocator-label/01-add-ta-label.yaml @@ -0,0 +1,26 @@ +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + 
name: ta + labels: + opentelemetry.io/target-allocator: ta +spec: + mode: statefulset + config: + receivers: + prometheus: + config: + scrape_configs: + - job_name: 'otel-collector' + scrape_interval: 10s + static_configs: + - targets: [ '0.0.0.0:8888' ] + exporters: + debug: {} + service: + pipelines: + metrics: + receivers: [prometheus] + exporters: [debug] + diff --git a/tests/e2e-targetallocator-cr/targetallocator-label/01-assert.yaml b/tests/e2e-targetallocator-cr/targetallocator-label/01-assert.yaml new file mode 100644 index 0000000000..c492114cb9 --- /dev/null +++ b/tests/e2e-targetallocator-cr/targetallocator-label/01-assert.yaml @@ -0,0 +1,39 @@ +--- +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + app.kubernetes.io/name: ta-collector +data: + collector.yaml: | + exporters: + debug: {} + receivers: + prometheus: + config: {} + target_allocator: + collector_id: ${POD_NAME} + endpoint: http://ta-targetallocator:80 + interval: 30s + service: + pipelines: + metrics: + exporters: + - debug + receivers: + - prometheus + telemetry: + metrics: + address: 0.0.0.0:8888 +--- +apiVersion: v1 +data: + targetallocator.yaml: + ( contains(@, join(':', ['app.kubernetes.io/component', ' opentelemetry-collector'])) ): true + ( contains(@, join('', ['app.kubernetes.io/instance:', ' ', $namespace, '.ta'])) ): true + ( contains(@, join(':', ['app.kubernetes.io/managed-by', ' opentelemetry-operator'])) ): true + ( contains(@, join(':', ['app.kubernetes.io/part-of', ' opentelemetry'])) ): true + ( contains(@, join(':', ['job_name', ' otel-collector'])) ): true +kind: ConfigMap +metadata: + name: ta-targetallocator \ No newline at end of file diff --git a/tests/e2e-targetallocator-cr/targetallocator-label/02-assert.yaml b/tests/e2e-targetallocator-cr/targetallocator-label/02-assert.yaml new file mode 100644 index 0000000000..7e0caf5f8e --- /dev/null +++ b/tests/e2e-targetallocator-cr/targetallocator-label/02-assert.yaml @@ -0,0 +1,39 @@ +--- +apiVersion: v1 +kind: 
ConfigMap +metadata: + labels: + app.kubernetes.io/name: ta-collector +data: + collector.yaml: | + exporters: + debug: {} + receivers: + prometheus: + config: {} + target_allocator: + collector_id: ${POD_NAME} + endpoint: http://ta-targetallocator:80 + interval: 30s + service: + pipelines: + metrics: + exporters: + - debug + receivers: + - prometheus + telemetry: + metrics: + address: 0.0.0.0:8888 +--- +apiVersion: v1 +data: + targetallocator.yaml: + ( contains(@, join(':', ['app.kubernetes.io/component', ' opentelemetry-collector'])) ): true + ( contains(@, join('', ['app.kubernetes.io/instance:', ' ', $namespace, '.ta'])) ): true + ( contains(@, join(':', ['app.kubernetes.io/managed-by', ' opentelemetry-operator'])) ): true + ( contains(@, join(':', ['app.kubernetes.io/part-of', ' opentelemetry'])) ): true + ( contains(@, join(':', ['job_name', ' otel-collector'])) ): false +kind: ConfigMap +metadata: + name: ta-targetallocator \ No newline at end of file diff --git a/tests/e2e-targetallocator-cr/targetallocator-label/02-change-collector-config.yaml b/tests/e2e-targetallocator-cr/targetallocator-label/02-change-collector-config.yaml new file mode 100644 index 0000000000..53cf1e598f --- /dev/null +++ b/tests/e2e-targetallocator-cr/targetallocator-label/02-change-collector-config.yaml @@ -0,0 +1,22 @@ +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: ta + labels: + opentelemetry.io/target-allocator: ta +spec: + mode: statefulset + config: + receivers: + prometheus: + config: + scrape_configs: [] + exporters: + debug: {} + service: + pipelines: + metrics: + receivers: [prometheus] + exporters: [debug] + diff --git a/tests/e2e-targetallocator-cr/targetallocator-label/03-assert.yaml b/tests/e2e-targetallocator-cr/targetallocator-label/03-assert.yaml new file mode 100644 index 0000000000..54bdf3c6e9 --- /dev/null +++ b/tests/e2e-targetallocator-cr/targetallocator-label/03-assert.yaml @@ -0,0 +1,10 @@ +--- +apiVersion: v1 +data: 
+ targetallocator.yaml: | + allocation_strategy: consistent-hashing + collector_selector: null + filter_strategy: "" +kind: ConfigMap +metadata: + name: ta-targetallocator \ No newline at end of file diff --git a/tests/e2e-targetallocator-cr/targetallocator-label/chainsaw-test.yaml b/tests/e2e-targetallocator-cr/targetallocator-label/chainsaw-test.yaml new file mode 100755 index 0000000000..50e0e85483 --- /dev/null +++ b/tests/e2e-targetallocator-cr/targetallocator-label/chainsaw-test.yaml @@ -0,0 +1,50 @@ +# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json +apiVersion: chainsaw.kyverno.io/v1alpha1 +kind: Test +metadata: + name: targetallocator-label +spec: + steps: + - name: step-00 + try: + - apply: + template: true + file: 00-install.yaml + - assert: + file: 00-assert.yaml + catch: + - podLogs: + selector: app.kubernetes.io/name=opentelemetry-operator + - name: step-01 + try: + - apply: + template: true + file: 01-add-ta-label.yaml + - assert: + file: 01-assert.yaml + catch: + - podLogs: + selector: app.kubernetes.io/name=opentelemetry-operator + - name: step-02 + try: + - apply: + template: true + file: 02-change-collector-config.yaml + - assert: + file: 02-assert.yaml + catch: + - podLogs: + selector: app.kubernetes.io/name=opentelemetry-operator + - name: step-03 + try: + - delete: + ref: + apiVersion: opentelemetry.io/v1beta1 + kind: OpenTelemetryCollector + name: ta + - assert: + file: 03-assert.yaml + catch: + - podLogs: + selector: app.kubernetes.io/name=opentelemetry-operator + \ No newline at end of file diff --git a/tests/e2e-targetallocator/targetallocator-features/00-assert.yaml b/tests/e2e-targetallocator/targetallocator-features/00-assert.yaml index fb1aaebc23..e89f3f31eb 100644 --- a/tests/e2e-targetallocator/targetallocator-features/00-assert.yaml +++ b/tests/e2e-targetallocator/targetallocator-features/00-assert.yaml @@ -20,7 +20,7 @@ spec: items: - key: 
collector.yaml path: collector.yaml - name: stateful-collector-85dbe673 + name: stateful-collector-c055e8e3 name: otc-internal - emptyDir: {} name: testvolume diff --git a/tests/e2e-targetallocator/targetallocator-features/00-install.yaml b/tests/e2e-targetallocator/targetallocator-features/00-install.yaml index 26eed14f12..9213d607a4 100644 --- a/tests/e2e-targetallocator/targetallocator-features/00-install.yaml +++ b/tests/e2e-targetallocator/targetallocator-features/00-install.yaml @@ -93,7 +93,6 @@ spec: runAsUser: 1000 prometheusCR: enabled: true - filterStrategy: "" securityContext: capabilities: add: diff --git a/tests/e2e-targetallocator/targetallocator-kubernetessd/00-assert.yaml b/tests/e2e-targetallocator/targetallocator-kubernetessd/00-assert.yaml index 93f7e176a2..1a5b0b9dab 100644 --- a/tests/e2e-targetallocator/targetallocator-kubernetessd/00-assert.yaml +++ b/tests/e2e-targetallocator/targetallocator-kubernetessd/00-assert.yaml @@ -15,7 +15,7 @@ metadata: apiVersion: v1 kind: ConfigMap metadata: - name: prometheus-kubernetessd-collector-699cdaa1 + name: prometheus-kubernetessd-collector-9c184e3a data: collector.yaml: | exporters: @@ -35,6 +35,9 @@ data: - prometheus receivers: - prometheus + telemetry: + metrics: + address: 0.0.0.0:8888 --- apiVersion: apps/v1 kind: DaemonSet diff --git a/tests/e2e-targetallocator/targetallocator-prometheuscr/00-assert.yaml b/tests/e2e-targetallocator/targetallocator-prometheuscr/00-assert.yaml index dd705e927b..5185c911cb 100644 --- a/tests/e2e-targetallocator/targetallocator-prometheuscr/00-assert.yaml +++ b/tests/e2e-targetallocator/targetallocator-prometheuscr/00-assert.yaml @@ -40,6 +40,9 @@ data: - prometheus receivers: - prometheus + telemetry: + metrics: + address: 0.0.0.0:8888 kind: ConfigMap metadata: - name: prometheus-cr-collector-52e1d2ae + name: prometheus-cr-collector-19c94a81 diff --git a/tests/e2e-upgrade/upgrade-test/opentelemetry-operator-v0.86.0.yaml 
b/tests/e2e-upgrade/upgrade-test/opentelemetry-operator-v0.86.0.yaml index cc8a9cac64..6ab2e4ac6e 100644 --- a/tests/e2e-upgrade/upgrade-test/opentelemetry-operator-v0.86.0.yaml +++ b/tests/e2e-upgrade/upgrade-test/opentelemetry-operator-v0.86.0.yaml @@ -8348,7 +8348,7 @@ spec: - --upstream=http://127.0.0.1:8080/ - --logtostderr=true - --v=0 - image: gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1 + image: quay.io/brancz/kube-rbac-proxy:v0.13.1 name: kube-rbac-proxy ports: - containerPort: 8443 diff --git a/tests/e2e/additional-containers-collector/00-assert-daemonset-without-additional-containers.yaml b/tests/e2e/additional-containers-collector/00-assert-daemonset-without-additional-containers.yaml new file mode 100644 index 0000000000..9c9a9588a2 --- /dev/null +++ b/tests/e2e/additional-containers-collector/00-assert-daemonset-without-additional-containers.yaml @@ -0,0 +1,14 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: daemonset-collector + app.kubernetes.io/part-of: opentelemetry + additional-containers: without +spec: + template: + spec: + (containers[?image == 'alpine' && name == 'alpine']): + (length(@)): 0 diff --git a/tests/e2e/additional-containers-collector/00-assert-deployment-without-additional-containers.yaml b/tests/e2e/additional-containers-collector/00-assert-deployment-without-additional-containers.yaml new file mode 100644 index 0000000000..8fbcd2fb4b --- /dev/null +++ b/tests/e2e/additional-containers-collector/00-assert-deployment-without-additional-containers.yaml @@ -0,0 +1,14 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: deployment-collector + app.kubernetes.io/part-of: opentelemetry + additional-containers: without +spec: + template: + spec: + 
(containers[?image == 'alpine' && name == 'alpine']): + (length(@)): 0 diff --git a/tests/e2e/additional-containers-collector/00-assert-statefulset-without-additional-containers.yaml b/tests/e2e/additional-containers-collector/00-assert-statefulset-without-additional-containers.yaml new file mode 100644 index 0000000000..25896fd560 --- /dev/null +++ b/tests/e2e/additional-containers-collector/00-assert-statefulset-without-additional-containers.yaml @@ -0,0 +1,14 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: statefulset-collector + app.kubernetes.io/part-of: opentelemetry + additional-containers: without +spec: + template: + spec: + (containers[?image == 'alpine' && name == 'alpine']): + (length(@)): 0 diff --git a/tests/e2e/additional-containers-collector/00-install-collectors-without-additional-containers.yaml b/tests/e2e/additional-containers-collector/00-install-collectors-without-additional-containers.yaml new file mode 100644 index 0000000000..392395ce4d --- /dev/null +++ b/tests/e2e/additional-containers-collector/00-install-collectors-without-additional-containers.yaml @@ -0,0 +1,73 @@ +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: deployment + labels: + additional-containers: without +spec: + mode: deployment + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: daemonset + labels: + additional-containers: without +spec: + mode: daemonset + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 
+kind: OpenTelemetryCollector +metadata: + name: statefulset + labels: + additional-containers: without +spec: + mode: statefulset + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] diff --git a/tests/e2e/additional-containers-collector/01-assert-daemonset-with-additional-containers.yaml b/tests/e2e/additional-containers-collector/01-assert-daemonset-with-additional-containers.yaml new file mode 100644 index 0000000000..77427d8cef --- /dev/null +++ b/tests/e2e/additional-containers-collector/01-assert-daemonset-with-additional-containers.yaml @@ -0,0 +1,16 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: daemonset-collector + app.kubernetes.io/part-of: opentelemetry + additional-containers: with +spec: + template: + spec: + (containers[?image == 'alpine' && name == 'alpine']): + (length(@)): 1 + (containers[?image == 'alpine' && name == 'alpine2']): + (length(@)): 1 diff --git a/tests/e2e/additional-containers-collector/01-assert-deployment-with-additional-containers.yaml b/tests/e2e/additional-containers-collector/01-assert-deployment-with-additional-containers.yaml new file mode 100644 index 0000000000..cae1197c53 --- /dev/null +++ b/tests/e2e/additional-containers-collector/01-assert-deployment-with-additional-containers.yaml @@ -0,0 +1,16 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: deployment-collector + app.kubernetes.io/part-of: opentelemetry + additional-containers: with +spec: + template: + spec: + (containers[?image == 'alpine' && name == 'alpine']): + (length(@)): 1 + (containers[?image == 'alpine' && name == 'alpine2']): + 
(length(@)): 1 diff --git a/tests/e2e/additional-containers-collector/01-assert-statefulset-with-additional-containers.yaml b/tests/e2e/additional-containers-collector/01-assert-statefulset-with-additional-containers.yaml new file mode 100644 index 0000000000..34496ba36c --- /dev/null +++ b/tests/e2e/additional-containers-collector/01-assert-statefulset-with-additional-containers.yaml @@ -0,0 +1,16 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: statefulset-collector + app.kubernetes.io/part-of: opentelemetry + additional-containers: with +spec: + template: + spec: + (containers[?image == 'alpine' && name == 'alpine']): + (length(@)): 1 + (containers[?image == 'alpine' && name == 'alpine2']): + (length(@)): 1 diff --git a/tests/e2e/additional-containers-collector/01-install-collectors-with-additional-containers.yaml b/tests/e2e/additional-containers-collector/01-install-collectors-with-additional-containers.yaml new file mode 100644 index 0000000000..ae03e35ee1 --- /dev/null +++ b/tests/e2e/additional-containers-collector/01-install-collectors-with-additional-containers.yaml @@ -0,0 +1,88 @@ +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: deployment + labels: + additional-containers: with +spec: + mode: deployment + additionalContainers: + - image: alpine + name: alpine + - image: alpine + name: alpine2 + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: daemonset + labels: + additional-containers: with +spec: + mode: daemonset + additionalContainers: + - image: alpine + name: alpine + - image: alpine + name: alpine2 + config: + receivers: + otlp: + protocols: + 
grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: statefulset + labels: + additional-containers: with +spec: + mode: statefulset + additionalContainers: + - image: alpine + name: alpine + - image: alpine + name: alpine2 + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] diff --git a/tests/e2e/additional-containers-collector/02-assert-daemonset-with-modified-additional-containers.yaml b/tests/e2e/additional-containers-collector/02-assert-daemonset-with-modified-additional-containers.yaml new file mode 100644 index 0000000000..7d45a9bad7 --- /dev/null +++ b/tests/e2e/additional-containers-collector/02-assert-daemonset-with-modified-additional-containers.yaml @@ -0,0 +1,18 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: daemonset-collector + app.kubernetes.io/part-of: opentelemetry + additional-containers: with +spec: + template: + spec: + (containers[?image == 'alpine' && name == 'alpine']): + (length(@)): 0 + (containers[?image == 'alpine' && name == 'alpine2']): + (length(@)): 0 + (containers[?image == 'alpine' && name == 'alpine3']): + (length(@)): 1 diff --git a/tests/e2e/additional-containers-collector/02-assert-deployment-with-modified-additional-containers.yaml b/tests/e2e/additional-containers-collector/02-assert-deployment-with-modified-additional-containers.yaml new file mode 100644 index 0000000000..12fc910899 --- /dev/null +++ b/tests/e2e/additional-containers-collector/02-assert-deployment-with-modified-additional-containers.yaml @@ -0,0 +1,18 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + 
app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: deployment-collector + app.kubernetes.io/part-of: opentelemetry + additional-containers: with +spec: + template: + spec: + (containers[?image == 'alpine' && name == 'alpine']): + (length(@)): 0 + (containers[?image == 'alpine' && name == 'alpine2']): + (length(@)): 0 + (containers[?image == 'alpine' && name == 'alpine3']): + (length(@)): 1 diff --git a/tests/e2e/additional-containers-collector/02-assert-statefulset-with-modified-additional-containers.yaml b/tests/e2e/additional-containers-collector/02-assert-statefulset-with-modified-additional-containers.yaml new file mode 100644 index 0000000000..4de3a7ffcd --- /dev/null +++ b/tests/e2e/additional-containers-collector/02-assert-statefulset-with-modified-additional-containers.yaml @@ -0,0 +1,18 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: statefulset-collector + app.kubernetes.io/part-of: opentelemetry + additional-containers: with +spec: + template: + spec: + (containers[?image == 'alpine' && name == 'alpine']): + (length(@)): 0 + (containers[?image == 'alpine' && name == 'alpine2']): + (length(@)): 0 + (containers[?image == 'alpine' && name == 'alpine3']): + (length(@)): 1 diff --git a/tests/e2e/additional-containers-collector/02-modify-collectors-additional-containers.yaml b/tests/e2e/additional-containers-collector/02-modify-collectors-additional-containers.yaml new file mode 100644 index 0000000000..45397baece --- /dev/null +++ b/tests/e2e/additional-containers-collector/02-modify-collectors-additional-containers.yaml @@ -0,0 +1,82 @@ +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: deployment + labels: + additional-containers: with +spec: + mode: deployment + additionalContainers: + - 
image: alpine + name: alpine3 + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: daemonset + labels: + additional-containers: with +spec: + mode: daemonset + additionalContainers: + - image: alpine + name: alpine3 + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: statefulset + labels: + additional-containers: with +spec: + mode: statefulset + additionalContainers: + - image: alpine + name: alpine3 + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] diff --git a/tests/e2e/additional-containers-collector/chainsaw-test.yaml b/tests/e2e/additional-containers-collector/chainsaw-test.yaml new file mode 100644 index 0000000000..64b47db477 --- /dev/null +++ b/tests/e2e/additional-containers-collector/chainsaw-test.yaml @@ -0,0 +1,66 @@ +# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json +apiVersion: chainsaw.kyverno.io/v1alpha1 +kind: Test +metadata: + name: additional-containers-collector +spec: + steps: + - name: step-00 + description: collectors without additionalContainers + try: + - apply: + file: 00-install-collectors-without-additional-containers.yaml + # deployment + - assert: + file: 00-assert-deployment-without-additional-containers.yaml + # daemonset + - assert: + file: 00-assert-daemonset-without-additional-containers.yaml + # statefulset + - assert: + file: 00-assert-statefulset-without-additional-containers.yaml + + - name: 
step-01 + description: collectors with additionalContainers + try: + - update: + file: 01-install-collectors-with-additional-containers.yaml + # deployment + - assert: + file: 01-assert-deployment-with-additional-containers.yaml + # daemonset + - assert: + file: 01-assert-daemonset-with-additional-containers.yaml + # statefulset + - assert: + file: 01-assert-statefulset-with-additional-containers.yaml + + - name: step-02 + description: modify additionalContainers + try: + - update: + file: 02-modify-collectors-additional-containers.yaml + # deployment + - assert: + file: 02-assert-deployment-with-modified-additional-containers.yaml + # daemonset + - assert: + file: 02-assert-daemonset-with-modified-additional-containers.yaml + # statefulset + - assert: + file: 02-assert-statefulset-with-modified-additional-containers.yaml + + - name: step-03 + description: delete additionalContainers + try: + - update: + file: 00-install-collectors-without-additional-containers.yaml + # deployment + - assert: + file: 00-assert-deployment-without-additional-containers.yaml + # daemonset + - assert: + file: 00-assert-daemonset-without-additional-containers.yaml + # statefulset + - assert: + file: 00-assert-statefulset-without-additional-containers.yaml diff --git a/tests/e2e/affinity-collector/00-assert-daemonset-without-affinity.yaml b/tests/e2e/affinity-collector/00-assert-daemonset-without-affinity.yaml new file mode 100644 index 0000000000..5f3c249b12 --- /dev/null +++ b/tests/e2e/affinity-collector/00-assert-daemonset-without-affinity.yaml @@ -0,0 +1,13 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: daemonset-collector + app.kubernetes.io/part-of: opentelemetry + affinity: without +spec: + template: + spec: + (affinity == null): true diff --git a/tests/e2e/affinity-collector/00-assert-deployment-without-affinity.yaml 
b/tests/e2e/affinity-collector/00-assert-deployment-without-affinity.yaml new file mode 100644 index 0000000000..2af7d10f9f --- /dev/null +++ b/tests/e2e/affinity-collector/00-assert-deployment-without-affinity.yaml @@ -0,0 +1,13 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: deployment-collector + app.kubernetes.io/part-of: opentelemetry + affinity: without +spec: + template: + spec: + (affinity == null): true diff --git a/tests/e2e/affinity-collector/00-assert-statefulset-without-affinity.yaml b/tests/e2e/affinity-collector/00-assert-statefulset-without-affinity.yaml new file mode 100644 index 0000000000..2e267cf6a4 --- /dev/null +++ b/tests/e2e/affinity-collector/00-assert-statefulset-without-affinity.yaml @@ -0,0 +1,13 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: statefulset-collector + app.kubernetes.io/part-of: opentelemetry + affinity: without +spec: + template: + spec: + (affinity == null): true diff --git a/tests/e2e/affinity-collector/00-install-collectors-without-affinity.yaml b/tests/e2e/affinity-collector/00-install-collectors-without-affinity.yaml new file mode 100644 index 0000000000..77e50adc36 --- /dev/null +++ b/tests/e2e/affinity-collector/00-install-collectors-without-affinity.yaml @@ -0,0 +1,73 @@ +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: deployment + labels: + affinity: without +spec: + mode: deployment + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: daemonset + labels: + 
affinity: without +spec: + mode: daemonset + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: statefulset + labels: + affinity: without +spec: + mode: statefulset + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] diff --git a/tests/e2e/affinity-collector/01-assert-daemonset-with-affinity.yaml b/tests/e2e/affinity-collector/01-assert-daemonset-with-affinity.yaml new file mode 100644 index 0000000000..9abf98b444 --- /dev/null +++ b/tests/e2e/affinity-collector/01-assert-daemonset-with-affinity.yaml @@ -0,0 +1,13 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: daemonset-collector + app.kubernetes.io/part-of: opentelemetry + affinity: with +spec: + template: + spec: + (affinity != null): true diff --git a/tests/e2e/affinity-collector/01-assert-deployment-with-affinity.yaml b/tests/e2e/affinity-collector/01-assert-deployment-with-affinity.yaml new file mode 100644 index 0000000000..114bc50253 --- /dev/null +++ b/tests/e2e/affinity-collector/01-assert-deployment-with-affinity.yaml @@ -0,0 +1,13 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: deployment-collector + app.kubernetes.io/part-of: opentelemetry + affinity: with +spec: + template: + spec: + (affinity != null): true diff --git a/tests/e2e/affinity-collector/01-assert-statefulset-with-affinity.yaml b/tests/e2e/affinity-collector/01-assert-statefulset-with-affinity.yaml new file 
mode 100644 index 0000000000..64e580f02a --- /dev/null +++ b/tests/e2e/affinity-collector/01-assert-statefulset-with-affinity.yaml @@ -0,0 +1,13 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: statefulset-collector + app.kubernetes.io/part-of: opentelemetry + affinity: with +spec: + template: + spec: + (affinity != null): true diff --git a/tests/e2e/affinity-collector/01-install-collectors-with-affinity.yaml b/tests/e2e/affinity-collector/01-install-collectors-with-affinity.yaml new file mode 100644 index 0000000000..95fcac394a --- /dev/null +++ b/tests/e2e/affinity-collector/01-install-collectors-with-affinity.yaml @@ -0,0 +1,100 @@ +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: deployment + labels: + affinity: with +spec: + mode: deployment + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: kubernetes.io/os + operator: In + values: + - linux + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: daemonset + labels: + affinity: with +spec: + mode: daemonset + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: kubernetes.io/os + operator: In + values: + - linux + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: statefulset + labels: + affinity: with +spec: + mode: statefulset + affinity: + 
nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: kubernetes.io/os + operator: In + values: + - linux + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] diff --git a/tests/e2e/affinity-collector/02-assert-daemonset-with-modified-affinity.yaml b/tests/e2e/affinity-collector/02-assert-daemonset-with-modified-affinity.yaml new file mode 100644 index 0000000000..8ec0913dad --- /dev/null +++ b/tests/e2e/affinity-collector/02-assert-daemonset-with-modified-affinity.yaml @@ -0,0 +1,16 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: daemonset-collector + app.kubernetes.io/part-of: opentelemetry + affinity: with +spec: + template: + spec: + affinity: + nodeAffinity: + (requiredDuringSchedulingIgnoredDuringExecution == null): true + (preferredDuringSchedulingIgnoredDuringExecution != null): true diff --git a/tests/e2e/affinity-collector/02-assert-deployment-with-modified-affinity.yaml b/tests/e2e/affinity-collector/02-assert-deployment-with-modified-affinity.yaml new file mode 100644 index 0000000000..34d594fe30 --- /dev/null +++ b/tests/e2e/affinity-collector/02-assert-deployment-with-modified-affinity.yaml @@ -0,0 +1,16 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: deployment-collector + app.kubernetes.io/part-of: opentelemetry + affinity: with +spec: + template: + spec: + affinity: + nodeAffinity: + (requiredDuringSchedulingIgnoredDuringExecution == null): true + (preferredDuringSchedulingIgnoredDuringExecution != null): true diff --git 
a/tests/e2e/affinity-collector/02-assert-statefulset-with-modified-affinity.yaml b/tests/e2e/affinity-collector/02-assert-statefulset-with-modified-affinity.yaml new file mode 100644 index 0000000000..530116b1fb --- /dev/null +++ b/tests/e2e/affinity-collector/02-assert-statefulset-with-modified-affinity.yaml @@ -0,0 +1,16 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: statefulset-collector + app.kubernetes.io/part-of: opentelemetry + affinity: with +spec: + template: + spec: + affinity: + nodeAffinity: + (requiredDuringSchedulingIgnoredDuringExecution == null): true + (preferredDuringSchedulingIgnoredDuringExecution != null): true diff --git a/tests/e2e/affinity-collector/02-modify-collectors-affinity.yaml b/tests/e2e/affinity-collector/02-modify-collectors-affinity.yaml new file mode 100644 index 0000000000..dec25b413c --- /dev/null +++ b/tests/e2e/affinity-collector/02-modify-collectors-affinity.yaml @@ -0,0 +1,103 @@ +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: deployment + labels: + affinity: with +spec: + mode: deployment + affinity: + nodeAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 1 + preference: + matchExpressions: + - key: kubernetes.io/os + operator: In + values: + - linux + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: daemonset + labels: + affinity: with +spec: + mode: daemonset + affinity: + nodeAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 1 + preference: + matchExpressions: + - key: kubernetes.io/os + operator: In + values: + - linux + config: + receivers: + otlp: + protocols: + grpc: 
{} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: statefulset + labels: + affinity: with +spec: + mode: statefulset + affinity: + nodeAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 1 + preference: + matchExpressions: + - key: kubernetes.io/os + operator: In + values: + - linux + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] diff --git a/tests/e2e/affinity-collector/chainsaw-test.yaml b/tests/e2e/affinity-collector/chainsaw-test.yaml new file mode 100644 index 0000000000..84d36ad7ed --- /dev/null +++ b/tests/e2e/affinity-collector/chainsaw-test.yaml @@ -0,0 +1,66 @@ +# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json +apiVersion: chainsaw.kyverno.io/v1alpha1 +kind: Test +metadata: + name: affinity-collector +spec: + steps: + - name: step-00 + description: collectors without affinity + try: + - apply: + file: 00-install-collectors-without-affinity.yaml + # deployment + - assert: + file: 00-assert-deployment-without-affinity.yaml + # daemonset + - assert: + file: 00-assert-daemonset-without-affinity.yaml + # statefulset + - assert: + file: 00-assert-statefulset-without-affinity.yaml + + - name: step-01 + description: collectors with affinity + try: + - update: + file: 01-install-collectors-with-affinity.yaml + # deployment + - assert: + file: 01-assert-deployment-with-affinity.yaml + # daemonset + - assert: + file: 01-assert-daemonset-with-affinity.yaml + # statefulset + - assert: + file: 01-assert-statefulset-with-affinity.yaml + + - name: step-02 + description: modify affinity + try: + - update: + file: 02-modify-collectors-affinity.yaml + # deployment + - assert: + file: 
02-assert-deployment-with-modified-affinity.yaml + # daemonset + - assert: + file: 02-assert-daemonset-with-modified-affinity.yaml + # statefulset + - assert: + file: 02-assert-statefulset-with-modified-affinity.yaml + + - name: step-03 + description: delete affinity + try: + - update: + file: 00-install-collectors-without-affinity.yaml + # deployment + - assert: + file: 00-assert-deployment-without-affinity.yaml + # daemonset + - assert: + file: 00-assert-daemonset-without-affinity.yaml + # statefulset + - assert: + file: 00-assert-statefulset-without-affinity.yaml diff --git a/tests/e2e/annotation-change-collector/00-assert-daemonset-with-extra-annotation.yaml b/tests/e2e/annotation-change-collector/00-assert-daemonset-with-extra-annotation.yaml new file mode 100644 index 0000000000..e33371e300 --- /dev/null +++ b/tests/e2e/annotation-change-collector/00-assert-daemonset-with-extra-annotation.yaml @@ -0,0 +1,11 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: daemonset-collector + annotations: + user-annotation: "existing" +spec: + template: + metadata: + annotations: + user-annotation: "existing" diff --git a/tests/e2e/annotation-change-collector/00-assert-deployment-with-extra-annotation.yaml b/tests/e2e/annotation-change-collector/00-assert-deployment-with-extra-annotation.yaml new file mode 100644 index 0000000000..e02ad87341 --- /dev/null +++ b/tests/e2e/annotation-change-collector/00-assert-deployment-with-extra-annotation.yaml @@ -0,0 +1,11 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: deployment-collector + annotations: + user-annotation: "existing" +spec: + template: + metadata: + annotations: + user-annotation: "existing" diff --git a/tests/e2e/annotation-change-collector/00-assert-statefulset-with-extra-annotation.yaml b/tests/e2e/annotation-change-collector/00-assert-statefulset-with-extra-annotation.yaml new file mode 100644 index 0000000000..b74f1945b0 --- /dev/null +++ 
b/tests/e2e/annotation-change-collector/00-assert-statefulset-with-extra-annotation.yaml @@ -0,0 +1,11 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: statefulset-collector + annotations: + user-annotation: "existing" +spec: + template: + metadata: + annotations: + user-annotation: "existing" diff --git a/tests/e2e/annotation-change-collector/00-install-collectors-with-extra-annotation.yaml b/tests/e2e/annotation-change-collector/00-install-collectors-with-extra-annotation.yaml new file mode 100644 index 0000000000..5e21322a3f --- /dev/null +++ b/tests/e2e/annotation-change-collector/00-install-collectors-with-extra-annotation.yaml @@ -0,0 +1,73 @@ +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: deployment + annotations: + user-annotation: "existing" +spec: + mode: deployment + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: daemonset + annotations: + user-annotation: "existing" +spec: + mode: daemonset + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: statefulset + annotations: + user-annotation: "existing" +spec: + mode: statefulset + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] diff --git a/tests/e2e/annotation-change-collector/01-assert-daemonset-with-annotation-change.yaml b/tests/e2e/annotation-change-collector/01-assert-daemonset-with-annotation-change.yaml new file mode 100644 index 0000000000..522e45cfba --- /dev/null +++ 
b/tests/e2e/annotation-change-collector/01-assert-daemonset-with-annotation-change.yaml @@ -0,0 +1,13 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: daemonset-collector + annotations: + user-annotation: "modified" + new-annotation: "yes" +spec: + template: + metadata: + annotations: + user-annotation: "modified" + new-annotation: "yes" diff --git a/tests/e2e/annotation-change-collector/01-assert-deployment-with-annotation-change.yaml b/tests/e2e/annotation-change-collector/01-assert-deployment-with-annotation-change.yaml new file mode 100644 index 0000000000..86b0ac0e6f --- /dev/null +++ b/tests/e2e/annotation-change-collector/01-assert-deployment-with-annotation-change.yaml @@ -0,0 +1,13 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: deployment-collector + annotations: + user-annotation: "modified" + new-annotation: "yes" +spec: + template: + metadata: + annotations: + user-annotation: "modified" + new-annotation: "yes" diff --git a/tests/e2e/annotation-change-collector/01-assert-statefulset-with-annotation-change.yaml b/tests/e2e/annotation-change-collector/01-assert-statefulset-with-annotation-change.yaml new file mode 100644 index 0000000000..c0bc38d353 --- /dev/null +++ b/tests/e2e/annotation-change-collector/01-assert-statefulset-with-annotation-change.yaml @@ -0,0 +1,13 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: statefulset-collector + annotations: + user-annotation: "modified" + new-annotation: "yes" +spec: + template: + metadata: + annotations: + user-annotation: "modified" + new-annotation: "yes" diff --git a/tests/e2e/annotation-change-collector/01-install-collectors-with-annotation-change.yaml b/tests/e2e/annotation-change-collector/01-install-collectors-with-annotation-change.yaml new file mode 100644 index 0000000000..bd02ae5766 --- /dev/null +++ b/tests/e2e/annotation-change-collector/01-install-collectors-with-annotation-change.yaml @@ -0,0 +1,76 @@ +apiVersion: opentelemetry.io/v1beta1 +kind: 
OpenTelemetryCollector +metadata: + name: deployment + annotations: + user-annotation: "modified" + new-annotation: "yes" +spec: + mode: deployment + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: daemonset + annotations: + user-annotation: "modified" + new-annotation: "yes" +spec: + mode: daemonset + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: statefulset + annotations: + user-annotation: "modified" + new-annotation: "yes" +spec: + mode: statefulset + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] diff --git a/tests/e2e/annotation-change-collector/02-assert-daemonset-without-extra-annotation.yaml b/tests/e2e/annotation-change-collector/02-assert-daemonset-without-extra-annotation.yaml new file mode 100644 index 0000000000..bfe14a3638 --- /dev/null +++ b/tests/e2e/annotation-change-collector/02-assert-daemonset-without-extra-annotation.yaml @@ -0,0 +1,15 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: daemonset-collector + (contains(keys(annotations), 'user-annotation')): true + (contains(keys(annotations), 'new-annotation')): true + annotations: + manual-annotation: "true" +spec: + template: + metadata: + (contains(keys(annotations), 'user-annotation')): true + (contains(keys(annotations), 'new-annotation')): true + annotations: + manual-annotation: "true" diff --git a/tests/e2e/annotation-change-collector/02-assert-deployment-without-extra-annotation.yaml 
b/tests/e2e/annotation-change-collector/02-assert-deployment-without-extra-annotation.yaml new file mode 100644 index 0000000000..b5c7232450 --- /dev/null +++ b/tests/e2e/annotation-change-collector/02-assert-deployment-without-extra-annotation.yaml @@ -0,0 +1,15 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: deployment-collector + (contains(keys(annotations), 'user-annotation')): true + (contains(keys(annotations), 'new-annotation')): true + annotations: + manual-annotation: "true" +spec: + template: + metadata: + (contains(keys(annotations), 'user-annotation')): true + (contains(keys(annotations), 'new-annotation')): true + annotations: + manual-annotation: "true" diff --git a/tests/e2e/annotation-change-collector/02-assert-statefulset-without-extra-annotation.yaml b/tests/e2e/annotation-change-collector/02-assert-statefulset-without-extra-annotation.yaml new file mode 100644 index 0000000000..8117b452f9 --- /dev/null +++ b/tests/e2e/annotation-change-collector/02-assert-statefulset-without-extra-annotation.yaml @@ -0,0 +1,15 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: statefulset-collector + (contains(keys(annotations), 'user-annotation')): true + (contains(keys(annotations), 'new-annotation')): true + annotations: + manual-annotation: "true" +spec: + template: + metadata: + (contains(keys(annotations), 'user-annotation')): true + (contains(keys(annotations), 'new-annotation')): true + annotations: + manual-annotation: "true" diff --git a/tests/e2e/annotation-change-collector/02-install-collectors-without-extra-annotation.yaml b/tests/e2e/annotation-change-collector/02-install-collectors-without-extra-annotation.yaml new file mode 100644 index 0000000000..4a50758f63 --- /dev/null +++ b/tests/e2e/annotation-change-collector/02-install-collectors-without-extra-annotation.yaml @@ -0,0 +1,67 @@ +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: deployment +spec: + mode: deployment + config: + 
receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: daemonset +spec: + mode: daemonset + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: statefulset +spec: + mode: statefulset + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] diff --git a/tests/e2e/annotation-change-collector/02-manual-annotation-resources.yaml b/tests/e2e/annotation-change-collector/02-manual-annotation-resources.yaml new file mode 100644 index 0000000000..e59bc43e11 --- /dev/null +++ b/tests/e2e/annotation-change-collector/02-manual-annotation-resources.yaml @@ -0,0 +1,35 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: daemonset-collector + annotations: + manual-annotation: "true" +spec: + template: + metadata: + annotations: + manual-annotation: "true" +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: deployment-collector + annotations: + manual-annotation: "true" +spec: + template: + metadata: + annotations: + manual-annotation: "true" +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: statefulset-collector + annotations: + manual-annotation: "true" +spec: + template: + metadata: + annotations: + manual-annotation: "true" diff --git a/tests/e2e/annotation-change-collector/chainsaw-test.yaml b/tests/e2e/annotation-change-collector/chainsaw-test.yaml new file mode 100644 index 0000000000..cf00491f49 --- /dev/null +++ b/tests/e2e/annotation-change-collector/chainsaw-test.yaml @@ -0,0 +1,53 @@ +# yaml-language-server: 
$schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json +apiVersion: chainsaw.kyverno.io/v1alpha1 +kind: Test +metadata: + name: annotation-change-collector +spec: + steps: + - name: step-00 + description: collectors with an extra annotation + try: + - apply: + file: 00-install-collectors-with-extra-annotation.yaml + # deployment + - assert: + file: 00-assert-deployment-with-extra-annotation.yaml + # daemonset + - assert: + file: 00-assert-daemonset-with-extra-annotation.yaml + # statefulset + - assert: + file: 00-assert-statefulset-with-extra-annotation.yaml + + - name: step-01 + description: collectors with changed extra annotations + try: + - update: + file: 01-install-collectors-with-annotation-change.yaml + # deployment + - assert: + file: 01-assert-deployment-with-annotation-change.yaml + # daemonset + - assert: + file: 01-assert-daemonset-with-annotation-change.yaml + # statefulset + - assert: + file: 01-assert-statefulset-with-annotation-change.yaml + + - name: step-02 + description: manually annotate resources and delete extra annotation from collector + try: + - apply: + file: 02-manual-annotation-resources.yaml + - update: + file: 02-install-collectors-without-extra-annotation.yaml + # deployment + - assert: + file: 02-assert-deployment-without-extra-annotation.yaml + # daemonset + - assert: + file: 02-assert-daemonset-without-extra-annotation.yaml + # statefulset + - assert: + file: 02-assert-statefulset-without-extra-annotation.yaml diff --git a/tests/e2e/args-collector/00-assert-daemonset-without-args.yaml b/tests/e2e/args-collector/00-assert-daemonset-without-args.yaml new file mode 100644 index 0000000000..1319f06b5f --- /dev/null +++ b/tests/e2e/args-collector/00-assert-daemonset-without-args.yaml @@ -0,0 +1,16 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + 
app.kubernetes.io/name: daemonset-collector + app.kubernetes.io/part-of: opentelemetry + args: without +spec: + template: + spec: + ~.(containers): + name: otc-container + (contains(args, '--extra-arg=yes')): false + (contains(args, '--different-extra-arg=yes')): false diff --git a/tests/e2e/args-collector/00-assert-deployment-without-args.yaml b/tests/e2e/args-collector/00-assert-deployment-without-args.yaml new file mode 100644 index 0000000000..74dd0c998b --- /dev/null +++ b/tests/e2e/args-collector/00-assert-deployment-without-args.yaml @@ -0,0 +1,16 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: deployment-collector + app.kubernetes.io/part-of: opentelemetry + args: without +spec: + template: + spec: + ~.(containers): + name: otc-container + (contains(args, '--extra-arg=yes')): false + (contains(args, '--different-extra-arg=yes')): false diff --git a/tests/e2e/args-collector/00-assert-statefulset-without-args.yaml b/tests/e2e/args-collector/00-assert-statefulset-without-args.yaml new file mode 100644 index 0000000000..70a30e913b --- /dev/null +++ b/tests/e2e/args-collector/00-assert-statefulset-without-args.yaml @@ -0,0 +1,16 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: statefulset-collector + app.kubernetes.io/part-of: opentelemetry + args: without +spec: + template: + spec: + ~.(containers): + name: otc-container + (contains(args, '--extra-arg=yes')): false + (contains(args, '--different-extra-arg=yes')): false diff --git a/tests/e2e/args-collector/00-install-collectors-without-args.yaml b/tests/e2e/args-collector/00-install-collectors-without-args.yaml new file mode 100644 index 0000000000..5073aaef66 --- /dev/null +++ 
b/tests/e2e/args-collector/00-install-collectors-without-args.yaml @@ -0,0 +1,73 @@ +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: deployment + labels: + args: without +spec: + mode: deployment + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: daemonset + labels: + args: without +spec: + mode: daemonset + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: statefulset + labels: + args: without +spec: + mode: statefulset + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] diff --git a/tests/e2e/args-collector/01-assert-daemonset-with-args.yaml b/tests/e2e/args-collector/01-assert-daemonset-with-args.yaml new file mode 100644 index 0000000000..0177692267 --- /dev/null +++ b/tests/e2e/args-collector/01-assert-daemonset-with-args.yaml @@ -0,0 +1,15 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: daemonset-collector + app.kubernetes.io/part-of: opentelemetry + args: with +spec: + template: + spec: + ~.(containers): + name: otc-container + (contains(args, '--extra-arg=yes')): true diff --git a/tests/e2e/args-collector/01-assert-deployment-with-args.yaml b/tests/e2e/args-collector/01-assert-deployment-with-args.yaml new file mode 100644 index 0000000000..c22de26d67 --- /dev/null +++ 
b/tests/e2e/args-collector/01-assert-deployment-with-args.yaml @@ -0,0 +1,15 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: deployment-collector + app.kubernetes.io/part-of: opentelemetry + args: with +spec: + template: + spec: + ~.(containers): + name: otc-container + (contains(args, '--extra-arg=yes')): true diff --git a/tests/e2e/args-collector/01-assert-statefulset-with-args.yaml b/tests/e2e/args-collector/01-assert-statefulset-with-args.yaml new file mode 100644 index 0000000000..3afba4be8e --- /dev/null +++ b/tests/e2e/args-collector/01-assert-statefulset-with-args.yaml @@ -0,0 +1,15 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: statefulset-collector + app.kubernetes.io/part-of: opentelemetry + args: with +spec: + template: + spec: + ~.(containers): + name: otc-container + (contains(args, '--extra-arg=yes')): true diff --git a/tests/e2e/args-collector/01-install-collectors-with-args.yaml b/tests/e2e/args-collector/01-install-collectors-with-args.yaml new file mode 100644 index 0000000000..8c6a03ab5e --- /dev/null +++ b/tests/e2e/args-collector/01-install-collectors-with-args.yaml @@ -0,0 +1,79 @@ +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: deployment + labels: + args: with +spec: + mode: deployment + args: + extra-arg: "yes" + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: daemonset + labels: + args: with +spec: + mode: daemonset + args: + extra-arg: "yes" + config: + receivers: + otlp: + protocols: 
+ grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: statefulset + labels: + args: with +spec: + mode: statefulset + args: + extra-arg: "yes" + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] diff --git a/tests/e2e/args-collector/02-assert-daemonset-with-modified-args.yaml b/tests/e2e/args-collector/02-assert-daemonset-with-modified-args.yaml new file mode 100644 index 0000000000..3246c130db --- /dev/null +++ b/tests/e2e/args-collector/02-assert-daemonset-with-modified-args.yaml @@ -0,0 +1,16 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: daemonset-collector + app.kubernetes.io/part-of: opentelemetry + args: with +spec: + template: + spec: + ~.(containers): + name: otc-container + (contains(args, '--extra-arg=yes')): false + (contains(args, '--different-extra-arg=yes')): true diff --git a/tests/e2e/args-collector/02-assert-deployment-with-modified-args.yaml b/tests/e2e/args-collector/02-assert-deployment-with-modified-args.yaml new file mode 100644 index 0000000000..4eedab6b04 --- /dev/null +++ b/tests/e2e/args-collector/02-assert-deployment-with-modified-args.yaml @@ -0,0 +1,16 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: deployment-collector + app.kubernetes.io/part-of: opentelemetry + args: with +spec: + template: + spec: + ~.(containers): + name: otc-container + (contains(args, '--extra-arg=yes')): false + (contains(args, '--different-extra-arg=yes')): true diff --git 
a/tests/e2e/args-collector/02-assert-statefulset-with-modified-args.yaml b/tests/e2e/args-collector/02-assert-statefulset-with-modified-args.yaml new file mode 100644 index 0000000000..6ed1dc4461 --- /dev/null +++ b/tests/e2e/args-collector/02-assert-statefulset-with-modified-args.yaml @@ -0,0 +1,16 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: statefulset-collector + app.kubernetes.io/part-of: opentelemetry + args: with +spec: + template: + spec: + ~.(containers): + name: otc-container + (contains(args, '--extra-arg=yes')): false + (contains(args, '--different-extra-arg=yes')): true diff --git a/tests/e2e/args-collector/02-modify-collectors-args.yaml b/tests/e2e/args-collector/02-modify-collectors-args.yaml new file mode 100644 index 0000000000..2d43e6c1f9 --- /dev/null +++ b/tests/e2e/args-collector/02-modify-collectors-args.yaml @@ -0,0 +1,79 @@ +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: deployment + labels: + args: with +spec: + mode: deployment + args: + different-extra-arg: "yes" + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: daemonset + labels: + args: with +spec: + mode: daemonset + args: + different-extra-arg: "yes" + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: statefulset + labels: + args: with +spec: + mode: statefulset + args: + different-extra-arg: "yes" + config: + receivers: + otlp: + protocols: + grpc: {} + 
processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] diff --git a/tests/e2e/args-collector/chainsaw-test.yaml b/tests/e2e/args-collector/chainsaw-test.yaml new file mode 100644 index 0000000000..2f3d6a7311 --- /dev/null +++ b/tests/e2e/args-collector/chainsaw-test.yaml @@ -0,0 +1,66 @@ +# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json +apiVersion: chainsaw.kyverno.io/v1alpha1 +kind: Test +metadata: + name: args-collector +spec: + steps: + - name: step-00 + description: collectors without args + try: + - apply: + file: 00-install-collectors-without-args.yaml + # deployment + - assert: + file: 00-assert-deployment-without-args.yaml + # daemonset + - assert: + file: 00-assert-daemonset-without-args.yaml + # statefulset + - assert: + file: 00-assert-statefulset-without-args.yaml + + - name: step-01 + description: collectors with args + try: + - update: + file: 01-install-collectors-with-args.yaml + # deployment + - assert: + file: 01-assert-deployment-with-args.yaml + # daemonset + - assert: + file: 01-assert-daemonset-with-args.yaml + # statefulset + - assert: + file: 01-assert-statefulset-with-args.yaml + + - name: step-02 + description: modify args + try: + - update: + file: 02-modify-collectors-args.yaml + # deployment + - assert: + file: 02-assert-deployment-with-modified-args.yaml + # daemonset + - assert: + file: 02-assert-daemonset-with-modified-args.yaml + # statefulset + - assert: + file: 02-assert-statefulset-with-modified-args.yaml + + - name: step-03 + description: delete args + try: + - update: + file: 00-install-collectors-without-args.yaml + # deployment + - assert: + file: 00-assert-deployment-without-args.yaml + # daemonset + - assert: + file: 00-assert-daemonset-without-args.yaml + # statefulset + - assert: + file: 00-assert-statefulset-without-args.yaml diff --git a/tests/e2e/extension/00-assert.yaml 
b/tests/e2e/extension/00-assert.yaml new file mode 100644 index 0000000000..c62406a1f3 --- /dev/null +++ b/tests/e2e/extension/00-assert.yaml @@ -0,0 +1,140 @@ +apiVersion: v1 +items: +- apiVersion: apps/v1 + kind: Deployment + metadata: + name: jaeger-inmemory-collector + spec: + template: + spec: + containers: + - ports: + - containerPort: 16686 + name: jaeger-query + protocol: TCP + - containerPort: 8888 + name: metrics + protocol: TCP + - containerPort: 4317 + name: otlp-grpc + protocol: TCP + - containerPort: 4318 + name: otlp-http + protocol: TCP +kind: List +metadata: + resourceVersion: "" +--- +apiVersion: v1 +kind: Service +metadata: + name: jaeger-inmemory-collector +spec: + ports: + - appProtocol: grpc + name: otlp-grpc + port: 4317 + protocol: TCP + targetPort: 4317 + - appProtocol: http + name: otlp-http + port: 4318 + protocol: TCP + targetPort: 4318 +--- +apiVersion: v1 +kind: Service +metadata: + annotations: + service.beta.openshift.io/serving-cert-secret-name: jaeger-inmemory-collector-headless-tls + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: jaeger-inmemory-collector + app.kubernetes.io/part-of: opentelemetry + app.kubernetes.io/version: latest + operator.opentelemetry.io/collector-headless-service: Exists + operator.opentelemetry.io/collector-service-type: headless + name: jaeger-inmemory-collector-headless + ownerReferences: + - apiVersion: opentelemetry.io/v1beta1 + blockOwnerDeletion: true + controller: true + kind: OpenTelemetryCollector + name: jaeger-inmemory +spec: + clusterIP: None + clusterIPs: + - None + internalTrafficPolicy: Cluster + ipFamilies: + - IPv4 + ipFamilyPolicy: SingleStack + ports: + - appProtocol: grpc + name: otlp-grpc + port: 4317 + protocol: TCP + targetPort: 4317 + - appProtocol: http + name: otlp-http + port: 4318 + protocol: TCP + targetPort: 4318 + selector: + app.kubernetes.io/component: opentelemetry-collector + 
app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/part-of: opentelemetry + sessionAffinity: None + type: ClusterIP +status: + loadBalancer: {} +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: jaeger-inmemory-collector-monitoring + app.kubernetes.io/part-of: opentelemetry + app.kubernetes.io/version: latest + operator.opentelemetry.io/collector-monitoring-service: Exists + operator.opentelemetry.io/collector-service-type: monitoring + name: jaeger-inmemory-collector-monitoring +spec: + ports: + - name: monitoring + port: 8888 + protocol: TCP + targetPort: 8888 + selector: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/part-of: opentelemetry + sessionAffinity: None + type: ClusterIP +status: + loadBalancer: {} +--- +apiVersion: v1 +kind: Service +metadata: + name: jaeger-inmemory-collector-extension + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/part-of: opentelemetry + app.kubernetes.io/version: latest + operator.opentelemetry.io/collector-service-type: extension +spec: + selector: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/part-of: opentelemetry + ports: + - name: jaeger-query + port: 16686 + targetPort: 16686 +status: + loadBalancer: {} diff --git a/tests/e2e/extension/00-install.yaml b/tests/e2e/extension/00-install.yaml new file mode 100644 index 0000000000..43e27fa9b2 --- /dev/null +++ b/tests/e2e/extension/00-install.yaml @@ -0,0 +1,30 @@ +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: jaeger-inmemory +spec: + image: jaegertracing/jaeger:latest + config: + service: + extensions: [jaeger_storage, 
jaeger_query] + pipelines: + traces: + receivers: [otlp] + exporters: [jaeger_storage_exporter] + extensions: + jaeger_query: + storage: + traces: memstore + jaeger_storage: + backends: + memstore: + memory: + max_traces: 100000 + receivers: + otlp: + protocols: + grpc: + http: + exporters: + jaeger_storage_exporter: + trace_storage: memstore diff --git a/tests/e2e/extension/chainsaw-test.yaml b/tests/e2e/extension/chainsaw-test.yaml new file mode 100644 index 0000000000..488a76359b --- /dev/null +++ b/tests/e2e/extension/chainsaw-test.yaml @@ -0,0 +1,14 @@ +# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json +apiVersion: chainsaw.kyverno.io/v1alpha1 +kind: Test +metadata: + creationTimestamp: null + name: extension-test +spec: + steps: + - name: step-00 + try: + - apply: + file: 00-install.yaml + - assert: + file: 00-assert.yaml diff --git a/tests/e2e/label-change-collector/00-assert-daemonset-with-extra-label.yaml b/tests/e2e/label-change-collector/00-assert-daemonset-with-extra-label.yaml new file mode 100644 index 0000000000..08e3c0661f --- /dev/null +++ b/tests/e2e/label-change-collector/00-assert-daemonset-with-extra-label.yaml @@ -0,0 +1,14 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: daemonset-collector + app.kubernetes.io/part-of: opentelemetry + user-label: "existing" +spec: + template: + metadata: + labels: + user-label: "existing" diff --git a/tests/e2e/label-change-collector/00-assert-deployment-with-extra-label.yaml b/tests/e2e/label-change-collector/00-assert-deployment-with-extra-label.yaml new file mode 100644 index 0000000000..91f64baf7e --- /dev/null +++ b/tests/e2e/label-change-collector/00-assert-deployment-with-extra-label.yaml @@ -0,0 +1,14 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + 
app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: deployment-collector + app.kubernetes.io/part-of: opentelemetry + user-label: "existing" +spec: + template: + metadata: + labels: + user-label: "existing" diff --git a/tests/e2e/label-change-collector/00-assert-statefulset-with-extra-label.yaml b/tests/e2e/label-change-collector/00-assert-statefulset-with-extra-label.yaml new file mode 100644 index 0000000000..575d27cd50 --- /dev/null +++ b/tests/e2e/label-change-collector/00-assert-statefulset-with-extra-label.yaml @@ -0,0 +1,14 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: statefulset-collector + app.kubernetes.io/part-of: opentelemetry + user-label: "existing" +spec: + template: + metadata: + labels: + user-label: "existing" diff --git a/tests/e2e/label-change-collector/00-install-collectors-with-extra-label.yaml b/tests/e2e/label-change-collector/00-install-collectors-with-extra-label.yaml new file mode 100644 index 0000000000..27fe143df5 --- /dev/null +++ b/tests/e2e/label-change-collector/00-install-collectors-with-extra-label.yaml @@ -0,0 +1,73 @@ +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: deployment + labels: + user-label: "existing" +spec: + mode: deployment + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: daemonset + labels: + user-label: "existing" +spec: + mode: daemonset + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- 
+apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: statefulset + labels: + user-label: "existing" +spec: + mode: statefulset + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] diff --git a/tests/e2e/label-change-collector/01-assert-daemonset-with-label-change.yaml b/tests/e2e/label-change-collector/01-assert-daemonset-with-label-change.yaml new file mode 100644 index 0000000000..770106939d --- /dev/null +++ b/tests/e2e/label-change-collector/01-assert-daemonset-with-label-change.yaml @@ -0,0 +1,16 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: daemonset-collector + app.kubernetes.io/part-of: opentelemetry + user-label: "modified" + new-label: "yes" +spec: + template: + metadata: + labels: + user-label: "modified" + new-label: "yes" diff --git a/tests/e2e/label-change-collector/01-assert-deployment-with-label-change.yaml b/tests/e2e/label-change-collector/01-assert-deployment-with-label-change.yaml new file mode 100644 index 0000000000..f694609ef1 --- /dev/null +++ b/tests/e2e/label-change-collector/01-assert-deployment-with-label-change.yaml @@ -0,0 +1,16 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: deployment-collector + app.kubernetes.io/part-of: opentelemetry + user-label: "modified" + new-label: "yes" +spec: + template: + metadata: + labels: + user-label: "modified" + new-label: "yes" diff --git a/tests/e2e/label-change-collector/01-assert-statefulset-with-label-change.yaml b/tests/e2e/label-change-collector/01-assert-statefulset-with-label-change.yaml new file mode 100644 index 0000000000..64a857d051 
--- /dev/null +++ b/tests/e2e/label-change-collector/01-assert-statefulset-with-label-change.yaml @@ -0,0 +1,16 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: statefulset-collector + app.kubernetes.io/part-of: opentelemetry + user-label: "modified" + new-label: "yes" +spec: + template: + metadata: + labels: + user-label: "modified" + new-label: "yes" diff --git a/tests/e2e/label-change-collector/01-install-collectors-with-label-change.yaml b/tests/e2e/label-change-collector/01-install-collectors-with-label-change.yaml new file mode 100644 index 0000000000..46ca873c80 --- /dev/null +++ b/tests/e2e/label-change-collector/01-install-collectors-with-label-change.yaml @@ -0,0 +1,76 @@ +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: deployment + labels: + user-label: "modified" + new-label: "yes" +spec: + mode: deployment + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: daemonset + labels: + user-label: "modified" + new-label: "yes" +spec: + mode: daemonset + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: statefulset + labels: + user-label: "modified" + new-label: "yes" +spec: + mode: statefulset + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] diff --git 
a/tests/e2e/label-change-collector/02-assert-daemonset-without-extra-label.yaml b/tests/e2e/label-change-collector/02-assert-daemonset-without-extra-label.yaml new file mode 100644 index 0000000000..4e7086d4eb --- /dev/null +++ b/tests/e2e/label-change-collector/02-assert-daemonset-without-extra-label.yaml @@ -0,0 +1,18 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + (contains(keys(labels),'user-label')): true + (contains(keys(labels),'new-label')): true + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: daemonset-collector + app.kubernetes.io/part-of: opentelemetry + manual-label: "true" +spec: + template: + metadata: + (contains(keys(labels),'user-label')): true + (contains(keys(labels),'new-label')): true + labels: + manual-label: "true" diff --git a/tests/e2e/label-change-collector/02-assert-deployment-without-extra-label.yaml b/tests/e2e/label-change-collector/02-assert-deployment-without-extra-label.yaml new file mode 100644 index 0000000000..cfad6a1965 --- /dev/null +++ b/tests/e2e/label-change-collector/02-assert-deployment-without-extra-label.yaml @@ -0,0 +1,18 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + (contains(keys(labels),'user-label')): true + (contains(keys(labels),'new-label')): true + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: deployment-collector + app.kubernetes.io/part-of: opentelemetry + manual-label: "true" +spec: + template: + metadata: + (contains(keys(labels),'user-label')): true + (contains(keys(labels),'new-label')): true + labels: + manual-label: "true" diff --git a/tests/e2e/label-change-collector/02-assert-statefulset-without-extra-label.yaml b/tests/e2e/label-change-collector/02-assert-statefulset-without-extra-label.yaml new file mode 100644 index 0000000000..72f24ab10e --- /dev/null +++ 
b/tests/e2e/label-change-collector/02-assert-statefulset-without-extra-label.yaml @@ -0,0 +1,18 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + (contains(keys(labels),'user-label')): true + (contains(keys(labels),'new-label')): true + labels: + app.kubernetes.io/component: opentelemetry-collector + app.kubernetes.io/managed-by: opentelemetry-operator + app.kubernetes.io/name: statefulset-collector + app.kubernetes.io/part-of: opentelemetry + manual-label: "true" +spec: + template: + metadata: + (contains(keys(labels),'user-label')): true + (contains(keys(labels),'new-label')): true + labels: + manual-label: "true" diff --git a/tests/e2e/label-change-collector/02-install-collectors-without-extra-label.yaml b/tests/e2e/label-change-collector/02-install-collectors-without-extra-label.yaml new file mode 100644 index 0000000000..4a50758f63 --- /dev/null +++ b/tests/e2e/label-change-collector/02-install-collectors-without-extra-label.yaml @@ -0,0 +1,67 @@ +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: deployment +spec: + mode: deployment + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: daemonset +spec: + mode: daemonset + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] + +--- +apiVersion: opentelemetry.io/v1beta1 +kind: OpenTelemetryCollector +metadata: + name: statefulset +spec: + mode: statefulset + config: + receivers: + otlp: + protocols: + grpc: {} + processors: {} + + exporters: + debug: {} + + service: + pipelines: + traces: + receivers: [otlp] + exporters: [debug] diff --git a/tests/e2e/label-change-collector/02-manual-labeling-resources.yaml 
b/tests/e2e/label-change-collector/02-manual-labeling-resources.yaml new file mode 100644 index 0000000000..637bd44009 --- /dev/null +++ b/tests/e2e/label-change-collector/02-manual-labeling-resources.yaml @@ -0,0 +1,35 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: daemonset-collector + labels: + manual-label: "true" +spec: + template: + metadata: + labels: + manual-label: "true" +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: deployment-collector + labels: + manual-label: "true" +spec: + template: + metadata: + labels: + manual-label: "true" +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: statefulset-collector + labels: + manual-label: "true" +spec: + template: + metadata: + labels: + manual-label: "true" diff --git a/tests/e2e/label-change-collector/chainsaw-test.yaml b/tests/e2e/label-change-collector/chainsaw-test.yaml new file mode 100644 index 0000000000..25542d4ce2 --- /dev/null +++ b/tests/e2e/label-change-collector/chainsaw-test.yaml @@ -0,0 +1,53 @@ +# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json +apiVersion: chainsaw.kyverno.io/v1alpha1 +kind: Test +metadata: + name: label-change-collector +spec: + steps: + - name: step-00 + description: collectors with an extra label + try: + - apply: + file: 00-install-collectors-with-extra-label.yaml + # deployment + - assert: + file: 00-assert-deployment-with-extra-label.yaml + # daemonset + - assert: + file: 00-assert-daemonset-with-extra-label.yaml + # statefulset + - assert: + file: 00-assert-statefulset-with-extra-label.yaml + + - name: step-01 + description: collectors with changed extra labels + try: + - update: + file: 01-install-collectors-with-label-change.yaml + # deployment + - assert: + file: 01-assert-deployment-with-label-change.yaml + # daemonset + - assert: + file: 01-assert-daemonset-with-label-change.yaml + # statefulset + - assert: + file: 
01-assert-statefulset-with-label-change.yaml + + - name: step-02 + description: delete extra label from collector + try: + - apply: + file: 02-manual-labeling-resources.yaml + - update: + file: 02-install-collectors-without-extra-label.yaml + # deployment + - assert: + file: 02-assert-deployment-without-extra-label.yaml + # daemonset + - assert: + file: 02-assert-daemonset-without-extra-label.yaml + # statefulset + - assert: + file: 02-assert-statefulset-without-extra-label.yaml diff --git a/tests/e2e/managed-reconcile/02-assert.yaml b/tests/e2e/managed-reconcile/02-assert.yaml index 0a8f5c29bf..e9bb69b67d 100644 --- a/tests/e2e/managed-reconcile/02-assert.yaml +++ b/tests/e2e/managed-reconcile/02-assert.yaml @@ -52,7 +52,7 @@ spec: apiVersion: v1 kind: ConfigMap metadata: - name: simplest-collector-a85e451c + name: simplest-collector-aec5aa11 data: collector.yaml: | receivers: @@ -65,6 +65,9 @@ data: exporters: debug: null service: + telemetry: + metrics: + address: 0.0.0.0:8888 pipelines: traces: exporters: diff --git a/tests/e2e/multiple-configmaps/00-assert.yaml b/tests/e2e/multiple-configmaps/00-assert.yaml index 54fca05399..8ff6b44ab5 100644 --- a/tests/e2e/multiple-configmaps/00-assert.yaml +++ b/tests/e2e/multiple-configmaps/00-assert.yaml @@ -25,7 +25,7 @@ spec: volumes: - name: otc-internal configMap: - name: simplest-with-configmaps-collector-a85e451c + name: simplest-with-configmaps-collector-aec5aa11 items: - key: collector.yaml path: collector.yaml diff --git a/tests/e2e/node-selector-collector/00-install-collectors-without-node-selector.yaml b/tests/e2e/node-selector-collector/00-install-collectors-without-node-selector.yaml index b4e3044ef0..baad346f14 100644 --- a/tests/e2e/node-selector-collector/00-install-collectors-without-node-selector.yaml +++ b/tests/e2e/node-selector-collector/00-install-collectors-without-node-selector.yaml @@ -1,4 +1,4 @@ -apiVersion: opentelemetry.io/v1alpha1 +apiVersion: opentelemetry.io/v1beta1 kind: 
OpenTelemetryCollector metadata: name: deployment @@ -7,16 +7,16 @@ metadata: spec: mode: deployment nodeSelector: - config: | + config: receivers: otlp: protocols: - grpc: - http: - processors: + grpc: {} + http: {} + processors: {} exporters: - debug: + debug: {} service: pipelines: @@ -25,7 +25,7 @@ spec: exporters: [debug] --- -apiVersion: opentelemetry.io/v1alpha1 +apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: daemonset @@ -34,16 +34,16 @@ metadata: spec: mode: daemonset nodeSelector: - config: | + config: receivers: otlp: protocols: - grpc: - http: - processors: + grpc: {} + http: {} + processors: {} exporters: - debug: + debug: {} service: pipelines: @@ -52,7 +52,7 @@ spec: exporters: [debug] --- -apiVersion: opentelemetry.io/v1alpha1 +apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: statefulset @@ -61,19 +61,19 @@ metadata: spec: mode: statefulset nodeSelector: - config: | + config: receivers: otlp: protocols: - grpc: - http: - processors: + grpc: {} + http: {} + processors: {} exporters: - debug: + debug: {} service: pipelines: traces: receivers: [otlp] - exporters: [debug] \ No newline at end of file + exporters: [debug] diff --git a/tests/e2e/node-selector-collector/01-install-collectors-with-node-selector.yaml b/tests/e2e/node-selector-collector/01-install-collectors-with-node-selector.yaml index b03ae2569e..f83f56eb2c 100644 --- a/tests/e2e/node-selector-collector/01-install-collectors-with-node-selector.yaml +++ b/tests/e2e/node-selector-collector/01-install-collectors-with-node-selector.yaml @@ -1,4 +1,4 @@ -apiVersion: opentelemetry.io/v1alpha1 +apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: deployment @@ -8,16 +8,16 @@ spec: mode: deployment nodeSelector: kubernetes.io/os: linux - config: | + config: receivers: otlp: protocols: - grpc: - http: - processors: + grpc: {} + http: {} + processors: {} exporters: - debug: + debug: {} service: pipelines: 
@@ -26,7 +26,7 @@ spec: exporters: [debug] --- -apiVersion: opentelemetry.io/v1alpha1 +apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: daemonset @@ -36,16 +36,16 @@ spec: mode: daemonset nodeSelector: kubernetes.io/os: linux - config: | + config: receivers: otlp: protocols: - grpc: - http: - processors: + grpc: {} + http: {} + processors: {} exporters: - debug: + debug: {} service: pipelines: @@ -54,7 +54,7 @@ spec: exporters: [debug] --- -apiVersion: opentelemetry.io/v1alpha1 +apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: statefulset @@ -64,19 +64,19 @@ spec: mode: statefulset nodeSelector: kubernetes.io/os: linux - config: | + config: receivers: otlp: protocols: - grpc: - http: - processors: + grpc: {} + http: {} + processors: {} exporters: - debug: + debug: {} service: pipelines: traces: receivers: [otlp] - exporters: [debug] \ No newline at end of file + exporters: [debug] diff --git a/tests/e2e/operator-restart/assert-operator-pod.yaml b/tests/e2e/operator-restart/assert-operator-pod.yaml new file mode 100644 index 0000000000..d8131db398 --- /dev/null +++ b/tests/e2e/operator-restart/assert-operator-pod.yaml @@ -0,0 +1,16 @@ +apiVersion: v1 +kind: Pod +metadata: + labels: + app.kubernetes.io/name: opentelemetry-operator + control-plane: controller-manager + namespace: ($OTEL_NAMESPACE) +status: + containerStatuses: + - name: kube-rbac-proxy + ready: true + started: true + - name: manager + ready: true + started: true + phase: Running diff --git a/tests/e2e/operator-restart/chainsaw-test.yaml b/tests/e2e/operator-restart/chainsaw-test.yaml new file mode 100644 index 0000000000..d5081d4fef --- /dev/null +++ b/tests/e2e/operator-restart/chainsaw-test.yaml @@ -0,0 +1,36 @@ +apiVersion: chainsaw.kyverno.io/v1alpha1 +kind: Test +metadata: + name: operator-restart +spec: + # Run this test serially, as it is disruptive: it restarts the operator pod + concurrent: false + steps: + - name: Delete
operator pod + try: + - command: + entrypoint: kubectl + args: + - get + - pods + - -A + - -l control-plane=controller-manager + - -l app.kubernetes.io/name=opentelemetry-operator + - -o + - jsonpath={.items[0].metadata.namespace} + outputs: + - name: OTEL_NAMESPACE + value: ($stdout) + - delete: + ref: + apiVersion: v1 + kind: Pod + namespace: ($OTEL_NAMESPACE) + labels: + control-plane: controller-manager + app.kubernetes.io/name: opentelemetry-operator + # Sleep for 10s here because the pod can report the Running state for a while and still fail later if any component has a startup issue. + - sleep: + duration: 10s + - assert: + file: assert-operator-pod.yaml \ No newline at end of file diff --git a/tests/e2e/smoke-targetallocator/00-assert.yaml b/tests/e2e/smoke-targetallocator/00-assert.yaml index aa86ab8094..1ba3d195e3 100644 --- a/tests/e2e/smoke-targetallocator/00-assert.yaml +++ b/tests/e2e/smoke-targetallocator/00-assert.yaml @@ -50,6 +50,9 @@ data: - debug receivers: - jaeger + telemetry: + metrics: + address: 0.0.0.0:8888 kind: ConfigMap metadata: - name: stateful-collector-57180221 + name: stateful-collector-7a42612e diff --git a/tests/e2e/statefulset-features/00-assert.yaml b/tests/e2e/statefulset-features/00-assert.yaml index b80a130bf6..b4e2d060b8 100644 --- a/tests/e2e/statefulset-features/00-assert.yaml +++ b/tests/e2e/statefulset-features/00-assert.yaml @@ -20,7 +20,7 @@ spec: items: - key: collector.yaml path: collector.yaml - name: stateful-collector-4b08af22 + name: stateful-collector-52b86f05 name: otc-internal - emptyDir: {} name: testvolume diff --git a/tests/e2e/statefulset-features/01-assert.yaml b/tests/e2e/statefulset-features/01-assert.yaml index 45584c25f3..9630e500d6 100644 --- a/tests/e2e/statefulset-features/01-assert.yaml +++ b/tests/e2e/statefulset-features/01-assert.yaml @@ -20,7 +20,7 @@ spec: items: - key: collector.yaml path: collector.yaml - name: stateful-collector-4b08af22 + name:
stateful-collector-52b86f05 name: otc-internal - emptyDir: {} name: testvolume diff --git a/tests/e2e/versioned-configmaps/00-assert.yaml b/tests/e2e/versioned-configmaps/00-assert.yaml index a1b499db1f..d0fcfd2a28 100644 --- a/tests/e2e/versioned-configmaps/00-assert.yaml +++ b/tests/e2e/versioned-configmaps/00-assert.yaml @@ -9,11 +9,11 @@ spec: volumes: - name: otc-internal configMap: - name: simple-collector-bf36603a + name: simple-collector-de9b8847 status: readyReplicas: 1 --- apiVersion: v1 kind: ConfigMap metadata: - name: simple-collector-bf36603a + name: simple-collector-de9b8847 diff --git a/tests/e2e/versioned-configmaps/01-assert.yaml b/tests/e2e/versioned-configmaps/01-assert.yaml index 169568e53a..1e291a0cb2 100644 --- a/tests/e2e/versioned-configmaps/01-assert.yaml +++ b/tests/e2e/versioned-configmaps/01-assert.yaml @@ -9,16 +9,16 @@ spec: volumes: - name: otc-internal configMap: - name: simple-collector-024c6417 + name: simple-collector-3f453d89 status: readyReplicas: 1 --- apiVersion: v1 kind: ConfigMap metadata: - name: simple-collector-024c6417 + name: simple-collector-3f453d89 --- apiVersion: v1 kind: ConfigMap metadata: - name: simple-collector-bf36603a + name: simple-collector-de9b8847 diff --git a/tests/e2e/volume-claim-label/00-assert.yaml b/tests/e2e/volume-claim-label/00-assert.yaml new file mode 100644 index 0000000000..471bb7ede9 --- /dev/null +++ b/tests/e2e/volume-claim-label/00-assert.yaml @@ -0,0 +1,42 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: stateful-collector +spec: + podManagementPolicy: Parallel + template: + spec: + containers: + - args: + - --config=/conf/collector.yaml + name: otc-container + volumeMounts: + - mountPath: /conf + name: otc-internal + - mountPath: /usr/share/testvolume + name: testvolume + volumes: + - configMap: + items: + - key: collector.yaml + path: collector.yaml + name: otc-internal + - emptyDir: {} + name: testvolume + volumeClaimTemplates: + - apiVersion: v1 + kind: 
PersistentVolumeClaim + metadata: + name: testvolume + labels: + test: "true" + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + volumeMode: Filesystem +status: + replicas: 3 + readyReplicas: 3 diff --git a/tests/e2e/volume-claim-label/00-install.yaml b/tests/e2e/volume-claim-label/00-install.yaml new file mode 100644 index 0000000000..07f49c18c4 --- /dev/null +++ b/tests/e2e/volume-claim-label/00-install.yaml @@ -0,0 +1,35 @@ +apiVersion: opentelemetry.io/v1alpha1 +kind: OpenTelemetryCollector +metadata: + name: stateful +spec: + mode: statefulset + replicas: 3 + volumes: + - name: testvolume + volumeMounts: + - name: testvolume + mountPath: /usr/share/testvolume + volumeClaimTemplates: + - metadata: + name: testvolume + labels: + test: "true" + spec: + accessModes: [ "ReadWriteOnce" ] + resources: + requests: + storage: 1Gi + config: | + receivers: + jaeger: + protocols: + grpc: + processors: + exporters: + debug: + service: + pipelines: + traces: + receivers: [jaeger] + exporters: [debug] diff --git a/tests/e2e/volume-claim-label/01-assert.yaml b/tests/e2e/volume-claim-label/01-assert.yaml new file mode 100644 index 0000000000..438efa163f --- /dev/null +++ b/tests/e2e/volume-claim-label/01-assert.yaml @@ -0,0 +1,42 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: stateful-collector +spec: + podManagementPolicy: Parallel + template: + spec: + containers: + - args: + - --config=/conf/collector.yaml + name: otc-container + volumeMounts: + - mountPath: /conf + name: otc-internal + - mountPath: /usr/share/testvolume + name: testvolume + volumes: + - configMap: + items: + - key: collector.yaml + path: collector.yaml + name: otc-internal + - emptyDir: {} + name: testvolume + volumeClaimTemplates: + - apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: testvolume + labels: + test: "updated" + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + volumeMode: Filesystem +status: + 
replicas: 3 + readyReplicas: 3 diff --git a/tests/e2e/volume-claim-label/01-update-volume-claim-template-labels.yaml b/tests/e2e/volume-claim-label/01-update-volume-claim-template-labels.yaml new file mode 100644 index 0000000000..5b1b68ea33 --- /dev/null +++ b/tests/e2e/volume-claim-label/01-update-volume-claim-template-labels.yaml @@ -0,0 +1,35 @@ +apiVersion: opentelemetry.io/v1alpha1 +kind: OpenTelemetryCollector +metadata: + name: stateful +spec: + mode: statefulset + replicas: 3 + volumes: + - name: testvolume + volumeMounts: + - name: testvolume + mountPath: /usr/share/testvolume + volumeClaimTemplates: + - metadata: + name: testvolume + labels: + test: "updated" + spec: + accessModes: [ "ReadWriteOnce" ] + resources: + requests: + storage: 1Gi + config: | + receivers: + jaeger: + protocols: + grpc: + processors: + exporters: + debug: + service: + pipelines: + traces: + receivers: [jaeger] + exporters: [debug] diff --git a/tests/e2e/volume-claim-label/chainsaw-test.yaml b/tests/e2e/volume-claim-label/chainsaw-test.yaml new file mode 100755 index 0000000000..e079f6a8ae --- /dev/null +++ b/tests/e2e/volume-claim-label/chainsaw-test.yaml @@ -0,0 +1,20 @@ +# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json +apiVersion: chainsaw.kyverno.io/v1alpha1 +kind: Test +metadata: + creationTimestamp: null + name: persistent-volume-claim-label +spec: + steps: + - name: step-00 + try: + - apply: + file: 00-install.yaml + - assert: + file: 00-assert.yaml + - name: step-01 + try: + - apply: + file: 01-update-volume-claim-template-labels.yaml + - assert: + file: 01-assert.yaml diff --git a/versions.txt b/versions.txt index 5ae4d22241..0820a83d2a 100644 --- a/versions.txt +++ b/versions.txt @@ -2,16 +2,16 @@ # by default with the OpenTelemetry Operator. This would usually be the latest # stable OpenTelemetry version. When you update this file, make sure to update the # docs as well.
-opentelemetry-collector=0.109.0 +opentelemetry-collector=0.114.0 # Represents the current release of the OpenTelemetry Operator. -operator=0.109.0 +operator=0.114.0 # Represents the current release of the Target Allocator. -targetallocator=0.109.0 +targetallocator=0.114.0 # Represents the current release of the Operator OpAMP Bridge. -operator-opamp-bridge=0.109.0 +operator-opamp-bridge=0.114.0 # Represents the current release of Java instrumentation. # Should match autoinstrumentation/java/version.txt @@ -19,7 +19,7 @@ autoinstrumentation-java=1.33.5 # Represents the current release of NodeJS instrumentation. # Should match value in autoinstrumentation/nodejs/package.json -autoinstrumentation-nodejs=0.52.1 +autoinstrumentation-nodejs=0.53.0 # Represents the current release of Python instrumentation. # Should match value in autoinstrumentation/python/requirements.txt @@ -30,7 +30,7 @@ autoinstrumentation-python=0.48b0 autoinstrumentation-dotnet=1.2.0 # Represents the current release of Go instrumentation. -autoinstrumentation-go=v0.14.0-alpha +autoinstrumentation-go=v0.17.0-alpha # Represents the current release of Apache HTTPD instrumentation. # Should match autoinstrumentation/apache-httpd/version.txt