Brief summary
Logs from the pod do not show any issues. I tried changing the image from grafana/k6-operator:controller-v0.0.18 to grafana/k6-operator:latest and still had no luck.
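One way to do that image swap for a quick retest is directly on the Deployment (namespace and deployment name taken from the events further down; the exact command is an assumption, not quoted from the report):

kubectl -n rc-e2e-api-49059 set image deployment/support-k6-operator-controller-manager \
  manager=ghcr.io/grafana/k6-operator:latest

The manager logs from the crash-looping pod are below: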
Defaulted container "manager" out of: manager, kube-rbac-proxy
2024-11-26T14:16:08Z INFO setup starting manager
2024-11-26T14:16:08Z INFO controller-runtime.metrics Starting metrics server
2024-11-26T14:16:08Z INFO controller-runtime.metrics Serving metrics server {"bindAddress": "127.0.0.1:8080", "secure": false}
2024-11-26T14:16:08Z INFO Starting EventSource {"controller": "k6", "controllerGroup": "k6.io", "controllerKind": "K6", "source": "kind source: *v1alpha1.K6"}
2024-11-26T14:16:08Z INFO Starting EventSource {"controller": "testrun", "controllerGroup": "k6.io", "controllerKind": "TestRun", "source": "kind source: *v1alpha1.TestRun"}
2024-11-26T14:16:08Z INFO Starting EventSource {"controller": "k6", "controllerGroup": "k6.io", "controllerKind": "K6", "source": "kind source: *v1.Job"}
2024-11-26T14:16:08Z INFO Starting EventSource {"controller": "testrun", "controllerGroup": "k6.io", "controllerKind": "TestRun", "source": "kind source: *v1.Job"}
2024-11-26T14:16:08Z INFO Starting EventSource {"controller": "k6", "controllerGroup": "k6.io", "controllerKind": "K6", "source": "kind source: *v1.Pod"}
2024-11-26T14:16:08Z INFO Starting Controller {"controller": "k6", "controllerGroup": "k6.io", "controllerKind": "K6"}
2024-11-26T14:16:08Z INFO Starting EventSource {"controller": "testrun", "controllerGroup": "k6.io", "controllerKind": "TestRun", "source": "kind source: *v1.Pod"}
2024-11-26T14:16:08Z INFO Starting Controller {"controller": "testrun", "controllerGroup": "k6.io", "controllerKind": "TestRun"}
2024-11-26T14:16:08Z INFO Starting EventSource {"controller": "privateloadzone", "controllerGroup": "k6.io", "controllerKind": "PrivateLoadZone", "source": "kind source: *v1alpha1.PrivateLoadZone"}
2024-11-26T14:16:08Z INFO Starting Controller {"controller": "privateloadzone", "controllerGroup": "k6.io", "controllerKind": "PrivateLoadZone"}
W1126 14:16:08.062283 1 warnings.go:70] This CRD is deprecated in favor of testruns.k6.io
W1126 14:16:08.251492 1 warnings.go:70] This CRD is deprecated in favor of testruns.k6.io
2024-11-26T14:16:11Z INFO Starting workers {"controller": "privateloadzone", "controllerGroup": "k6.io", "controllerKind": "PrivateLoadZone", "worker count": 1}
2024-11-26T14:16:11Z INFO Starting workers {"controller": "testrun", "controllerGroup": "k6.io", "controllerKind": "TestRun", "worker count": 1}
2024-11-26T14:16:11Z INFO Starting workers {"controller": "k6", "controllerGroup": "k6.io", "controllerKind": "K6", "worker count": 1}
k6-operator version or image
v0.0.18
Helm chart version (if applicable)
3.10.1
TestRun / PrivateLoadZone YAML
None; only the controller deployment is installed so far, and it is in a crash loop.
Other environment details (if applicable)
No response
Steps to reproduce the problem
Installed the Helm chart into a custom namespace; the controller-manager Deployment never becomes available. Its status shows:
status:
  conditions:
  - lastTransitionTime: "2024-11-26T14:20:47Z"
    lastUpdateTime: "2024-11-26T14:20:47Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  observedGeneration: 2
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1
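For completeness, the install itself was presumably something along these lines (release name support, namespace rc-e2e-api-49059, and chart version 3.10.1 are taken from the Helm labels and events below; the exact command is an assumption):

helm repo add grafana https://grafana.github.io/helm-charts
helm install support grafana/k6-operator \
  --namespace rc-e2e-api-49059 \
  --version 3.10.1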
Expected behaviour
The controller-manager deployment comes up once and stays running.
kubectl events
7m15s Normal Scheduled pod/support-k6-operator-controller-manager-6fff986c94-kggxc Successfully assigned rc-e2e-api-49059/support-k6-operator-controller-manager-6fff986c94-kggxc to ip-10-17-129-207.ec2.internal
7m15s Normal Pulled pod/support-k6-operator-controller-manager-6fff986c94-kggxc Container image "ghcr.io/grafana/k6-operator:controller-v0.0.18" already present on machine
7m15s Normal Created pod/support-k6-operator-controller-manager-6fff986c94-kggxc Created container manager
7m15s Normal Started pod/support-k6-operator-controller-manager-6fff986c94-kggxc Started container manager
7m15s Normal Pulled pod/support-k6-operator-controller-manager-6fff986c94-kggxc Container image "gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0" already present on machine
7m15s Normal Created pod/support-k6-operator-controller-manager-6fff986c94-kggxc Created container kube-rbac-proxy
7m15s Normal Started pod/support-k6-operator-controller-manager-6fff986c94-kggxc Started container kube-rbac-proxy
7m7s Normal Killing pod/support-k6-operator-controller-manager-6fff986c94-kggxc Stopping container manager
7m7s Normal Killing pod/support-k6-operator-controller-manager-6fff986c94-kggxc Stopping container kube-rbac-proxy
9m59s Normal Scheduled pod/support-k6-operator-controller-manager-6fff986c94-r922g Successfully assigned rc-e2e-api-49059/support-k6-operator-controller-manager-6fff986c94-r922g to ip-10-17-24-62.ec2.internal
9m58s Normal Pulling pod/support-k6-operator-controller-manager-6fff986c94-r922g Pulling image "ghcr.io/grafana/k6-operator:controller-v0.0.18"
9m57s Normal Pulled pod/support-k6-operator-controller-manager-6fff986c94-r922g Successfully pulled image "ghcr.io/grafana/k6-operator:controller-v0.0.18" in 899ms (899ms including waiting)
9m1s Normal Created pod/support-k6-operator-controller-manager-6fff986c94-r922g Created container manager
9m1s Normal Started pod/support-k6-operator-controller-manager-6fff986c94-r922g Started container manager
9m57s Normal Pulling pod/support-k6-operator-controller-manager-6fff986c94-r922g Pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0"
9m55s Normal Pulled pod/support-k6-operator-controller-manager-6fff986c94-r922g Successfully pulled image "gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0" in 1.498s (1.498s including waiting)
9m55s Normal Created pod/support-k6-operator-controller-manager-6fff986c94-r922g Created container kube-rbac-proxy
9m55s Normal Started pod/support-k6-operator-controller-manager-6fff986c94-r922g Started container kube-rbac-proxy
8m Normal Pulled pod/support-k6-operator-controller-manager-6fff986c94-r922g Container image "ghcr.io/grafana/k6-operator:controller-v0.0.18" already present on machine
8m15s Warning BackOff pod/support-k6-operator-controller-manager-6fff986c94-r922g Back-off restarting failed container manager in pod support-k6-operator-controller-manager-6fff986c94-r922g_rc-e2e-api-49059(337c18ae-3895-40b2-901e-44e7d8d5b167)
12m Normal Scheduled pod/support-k6-operator-controller-manager-6fff986c94-zd9l7 Successfully assigned rc-e2e-api-49059/support-k6-operator-controller-manager-6fff986c94-zd9l7 to ip-10-17-129-207.ec2.internal
12m Normal Pulling pod/support-k6-operator-controller-manager-6fff986c94-zd9l7 Pulling image "ghcr.io/grafana/k6-operator:controller-v0.0.18"
12m Normal Pulled pod/support-k6-operator-controller-manager-6fff986c94-zd9l7 Successfully pulled image "ghcr.io/grafana/k6-operator:controller-v0.0.18" in 1.705s (1.705s including waiting)
11m Normal Created pod/support-k6-operator-controller-manager-6fff986c94-zd9l7 Created container manager
11m Normal Started pod/support-k6-operator-controller-manager-6fff986c94-zd9l7 Started container manager
12m Normal Pulling pod/support-k6-operator-controller-manager-6fff986c94-zd9l7 Pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0"
12m Normal Pulled pod/support-k6-operator-controller-manager-6fff986c94-zd9l7 Successfully pulled image "gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0" in 1.314s (1.314s including waiting)
12m Normal Created pod/support-k6-operator-controller-manager-6fff986c94-zd9l7 Created container kube-rbac-proxy
12m Normal Started pod/support-k6-operator-controller-manager-6fff986c94-zd9l7 Started container kube-rbac-proxy
10m Normal Pulled pod/support-k6-operator-controller-manager-6fff986c94-zd9l7 Container image "ghcr.io/grafana/k6-operator:controller-v0.0.18" already present on machine
10m Warning BackOff pod/support-k6-operator-controller-manager-6fff986c94-zd9l7 Back-off restarting failed container manager in pod support-k6-operator-controller-manager-6fff986c94-zd9l7_rc-e2e-api-49059(9430411a-236d-49e6-b934-fab8e0fbef89)
12m Normal SuccessfulCreate replicaset/support-k6-operator-controller-manager-6fff986c94 Created pod: support-k6-operator-controller-manager-6fff986c94-zd9l7
9m59s Normal SuccessfulCreate replicaset/support-k6-operator-controller-manager-6fff986c94 Created pod: support-k6-operator-controller-manager-6fff986c94-r922g
7m15s Normal SuccessfulCreate replicaset/support-k6-operator-controller-manager-6fff986c94 Created pod: support-k6-operator-controller-manager-6fff986c94-kggxc
7m7s Normal SuccessfulDelete replicaset/support-k6-operator-controller-manager-6fff986c94 Deleted pod: support-k6-operator-controller-manager-6fff986c94-kggxc
3m19s Normal Scheduled pod/support-k6-operator-controller-manager-7fc9fdcdd5-2kf6g Successfully assigned rc-e2e-api-49059/support-k6-operator-controller-manager-7fc9fdcdd5-2kf6g to ip-10-17-129-207.ec2.internal
81s Normal Pulled pod/support-k6-operator-controller-manager-7fc9fdcdd5-2kf6g Container image "ghcr.io/grafana/k6-operator:latest" already present on machine
81s Normal Created pod/support-k6-operator-controller-manager-7fc9fdcdd5-2kf6g Created container manager
81s Normal Started pod/support-k6-operator-controller-manager-7fc9fdcdd5-2kf6g Started container manager
3m18s Normal Pulled pod/support-k6-operator-controller-manager-7fc9fdcdd5-2kf6g Container image "gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0" already present on machine
3m18s Normal Created pod/support-k6-operator-controller-manager-7fc9fdcdd5-2kf6g Created container kube-rbac-proxy
3m18s Normal Started pod/support-k6-operator-controller-manager-7fc9fdcdd5-2kf6g Started container kube-rbac-proxy
93s Warning BackOff pod/support-k6-operator-controller-manager-7fc9fdcdd5-2kf6g Back-off restarting failed container manager in pod support-k6-operator-controller-manager-7fc9fdcdd5-2kf6g_rc-e2e-api-49059(a70561cb-f0d2-4004-9df6-87c951f51fd4)
7m8s Normal Scheduled pod/support-k6-operator-controller-manager-7fc9fdcdd5-c5n8z Successfully assigned rc-e2e-api-49059/support-k6-operator-controller-manager-7fc9fdcdd5-c5n8z to ip-10-17-129-207.ec2.internal
7m7s Normal Pulling pod/support-k6-operator-controller-manager-7fc9fdcdd5-c5n8z Pulling image "ghcr.io/grafana/k6-operator:latest"
7m7s Normal Pulled pod/support-k6-operator-controller-manager-7fc9fdcdd5-c5n8z Successfully pulled image "ghcr.io/grafana/k6-operator:latest" in 111ms (111ms including waiting)
4m6s Normal Created pod/support-k6-operator-controller-manager-7fc9fdcdd5-c5n8z Created container manager
5m5s Normal Started pod/support-k6-operator-controller-manager-7fc9fdcdd5-c5n8z Started container manager
7m7s Normal Pulled pod/support-k6-operator-controller-manager-7fc9fdcdd5-c5n8z Container image "gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0" already present on machine
7m7s Normal Created pod/support-k6-operator-controller-manager-7fc9fdcdd5-c5n8z Created container kube-rbac-proxy
7m7s Normal Started pod/support-k6-operator-controller-manager-7fc9fdcdd5-c5n8z Started container kube-rbac-proxy
4m6s Normal Pulled pod/support-k6-operator-controller-manager-7fc9fdcdd5-c5n8z Container image "ghcr.io/grafana/k6-operator:latest" already present on machine
4m20s Warning BackOff pod/support-k6-operator-controller-manager-7fc9fdcdd5-c5n8z Back-off restarting failed container manager in pod support-k6-operator-controller-manager-7fc9fdcdd5-c5n8z_rc-e2e-api-49059(8f7cd8a0-794b-4b18-af58-ff6f1b9ff528)
61s Normal Scheduled pod/support-k6-operator-controller-manager-7fc9fdcdd5-kp7w5 Successfully assigned rc-e2e-api-49059/support-k6-operator-controller-manager-7fc9fdcdd5-kp7w5 to ip-10-17-60-77.ec2.internal
61s Normal Pulling pod/support-k6-operator-controller-manager-7fc9fdcdd5-kp7w5 Pulling image "ghcr.io/grafana/k6-operator:latest"
60s Normal Pulled pod/support-k6-operator-controller-manager-7fc9fdcdd5-kp7w5 Successfully pulled image "ghcr.io/grafana/k6-operator:latest" in 1.137s (1.137s including waiting)
24s Normal Created pod/support-k6-operator-controller-manager-7fc9fdcdd5-kp7w5 Created container manager
24s Normal Started pod/support-k6-operator-controller-manager-7fc9fdcdd5-kp7w5 Started container manager
59s Normal Pulling pod/support-k6-operator-controller-manager-7fc9fdcdd5-kp7w5 Pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0"
58s Normal Pulled pod/support-k6-operator-controller-manager-7fc9fdcdd5-kp7w5 Successfully pulled image "gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0" in 1.622s (1.622s including waiting)
58s Normal Created pod/support-k6-operator-controller-manager-7fc9fdcdd5-kp7w5 Created container kube-rbac-proxy
58s Normal Started pod/support-k6-operator-controller-manager-7fc9fdcdd5-kp7w5 Started container kube-rbac-proxy
24s Normal Pulled pod/support-k6-operator-controller-manager-7fc9fdcdd5-kp7w5 Container image "ghcr.io/grafana/k6-operator:latest" already present on machine
1s Warning BackOff pod/support-k6-operator-controller-manager-7fc9fdcdd5-kp7w5 Back-off restarting failed container manager in pod support-k6-operator-controller-manager-7fc9fdcdd5-kp7w5_rc-e2e-api-49059(7a50edb9-b9db-469b-a7aa-ebc2ed58fb67)
7m8s Normal SuccessfulCreate replicaset/support-k6-operator-controller-manager-7fc9fdcdd5 Created pod: support-k6-operator-controller-manager-7fc9fdcdd5-c5n8z
3m19s Normal SuccessfulCreate replicaset/support-k6-operator-controller-manager-7fc9fdcdd5 Created pod: support-k6-operator-controller-manager-7fc9fdcdd5-2kf6g
61s Normal SuccessfulCreate replicaset/support-k6-operator-controller-manager-7fc9fdcdd5 Created pod: support-k6-operator-controller-manager-7fc9fdcdd5-kp7w5
12m Normal ScalingReplicaSet deployment/support-k6-operator-controller-manager Scaled up replica set support-k6-operator-controller-manager-6fff986c94 to 1
7m8s Normal ScalingReplicaSet deployment/support-k6-operator-controller-manager Scaled up replica set support-k6-operator-controller-manager-7fc9fdcdd5 to 1
7m7s Normal ScalingReplicaSet deployment/support-k6-operator-controller-manager Scaled down replica set support-k6-operator-controller-manager-6fff986c94 to 0 from 1
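(The events above can be reproduced with something like kubectl get events -n rc-e2e-api-49059 --sort-by=.lastTimestamp; the exact invocation used here is an assumption.)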
Actual behaviour
It keeps bouncing and does not stay stable; the manager container is repeatedly restarted.
alexmtrmd changed the title from "Pod [support-k6-operator-controller-manager-6fff986c94-rc2ks] is in CrashLoopBackOff for the [4] time, sending a restart request to reset backoff period" to "Pod [support-k6-operator-controller-manager-6fff986c94-rc2ks] is in CrashLoopBackOff for the [4] time" on Nov 26, 2024.
Hi @alexmtrmd, you mention that the operator has been going into CrashLoopBackOff, but that is not present in the events you posted, nor are there any errors in the logs. Could you please double-check what exactly is causing the container to shut down?
Additionally, the "how to reproduce" section does not seem complete at the moment.
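For example, something along these lines would surface the last termination state and previous-run logs of the manager container (pod name and namespace taken from the events above, adjust as needed; these are standard kubectl commands, nothing k6-operator specific):

kubectl -n rc-e2e-api-49059 describe pod support-k6-operator-controller-manager-7fc9fdcdd5-kp7w5
kubectl -n rc-e2e-api-49059 logs support-k6-operator-controller-manager-7fc9fdcdd5-kp7w5 -c manager --previous
kubectl -n rc-e2e-api-49059 get pod support-k6-operator-controller-manager-7fc9fdcdd5-kp7w5 \
  -o jsonpath='{.status.containerStatuses[?(@.name=="manager")].lastState.terminated}'

The lastState.terminated block should show the exit code and a reason such as Error or OOMKilled, which would narrow down why the container keeps shutting down.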
Full manifest and status of the controller-manager Deployment created by the Helm install:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "2"
meta.helm.sh/release-name: support
meta.helm.sh/release-namespace: rc-e2e-api-49059
creationTimestamp: "2024-11-26T14:09:17Z"
generation: 2
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: support
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: k6-operator
app.kubernetes.io/part-of: k6-operator
app.kubernetes.io/version: 0.0.18
control-plane: controller-manager
helm.sh/chart: k6-operator-3.10.1
name: support-k6-operator-controller-manager
namespace: rc-e2e-api-49059
resourceVersion: "1174488240"
uid: 126b81ed-d466-4774-bb21-3f4febf7654f
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/instance: support
app.kubernetes.io/name: k6-operator
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: support
app.kubernetes.io/name: k6-operator
control-plane: controller-manager
spec:
containers:
- args:
- --metrics-addr=127.0.0.1:8080
command:
- /manager
image: ghcr.io/grafana/k6-operator:latest
imagePullPolicy: IfNotPresent
name: manager
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 100m
memory: 50Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
- args:
- --secure-listen-address=0.0.0.0:8443
- --upstream=http://127.0.0.1:8080/
- --logtostderr=true
- --v=10
image: gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0
imagePullPolicy: IfNotPresent
name: kube-rbac-proxy
ports:
- containerPort: 8443
name: https
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: k6-operator-controller
serviceAccountName: k6-operator-controller
terminationGracePeriodSeconds: 10
status:
  conditions:
  - lastUpdateTime: "2024-11-26T14:14:55Z"
    message: ReplicaSet "support-k6-operator-controller-manager-7fc9fdcdd5" has successfully
      progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2024-11-26T14:20:47Z"
    lastUpdateTime: "2024-11-26T14:20:47Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  observedGeneration: 2
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1