
registry-buddy healthcheck failing #674

Open

sathishbob opened this issue Jul 22, 2021 · 7 comments

Comments

@sathishbob

Hi,

I am trying cf-for-k8s release v5.1.0.

The cluster is up, but the cf-api-server and cf-api-worker pods are going into CrashLoopBackOff.

On checking, it seems the registry-buddy container in both pods is failing its health check.

Below is the relevant output.

cf-system cf-api-controllers-cccbb945c-z25td 3/3 Running 1 33m
cf-system cf-api-deployment-updater-647fcf5477-mtzfl 2/2 Running 0 33m
cf-system cf-api-server-6d54fd6d77-c6x6z 5/6 CrashLoopBackOff 13 30m
cf-system cf-api-worker-7b554b6c9c-hbkgj 2/3 CrashLoopBackOff 12 30m
cf-system eirini-api-c48f5b-bcccf 2/2 Running 0 33m

registry-buddy:
Container ID: containerd://e72a2c6ab8955e228dd54ce645744b77dd99784cb178211087d5abb8613c06b2
Image: cloudfoundry/cf-api-package-registry-buddy@sha256:163aca64a4e0aa1a3c8a9555d13b3c7218ae059c1ec2d986d783d961daa52d1d
Image ID: docker.io/cloudfoundry/cf-api-package-registry-buddy@sha256:163aca64a4e0aa1a3c8a9555d13b3c7218ae059c1ec2d986d783d961daa52d1d
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 23 Jul 2021 00:44:26 +0530
Finished: Fri, 23 Jul 2021 00:45:01 +0530
Ready: False
Restart Count: 11
Liveness: exec [curl --silent --fail --show-error localhost:8080/healthz] delay=5s timeout=1s period=10s #success=1 #failure=3
Environment:
REGISTRY_BASE_PATH: sathishbob
REGISTRY_USERNAME: <set to the key 'username' in secret 'cc-package-registry-upload-secret-ver-1'> Optional: false
REGISTRY_PASSWORD: <set to the key 'password' in secret 'cc-package-registry-upload-secret-ver-1'> Optional: false
Mounts:
/tmp/packages from tmp-packages (rw)
/var/run/secrets/kubernetes.io/serviceaccount from cf-api-server-service-account-token-j5l9w (ro)
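
For reference, the Liveness line above corresponds to a probe roughly like the following in the registry-buddy container spec (reconstructed from the values shown; the field names are standard Kubernetes, but the exact block in capi-k8s-release may differ):

livenessProbe:
  exec:
    command:
      - curl
      - --silent
      - --fail
      - --show-error
      - localhost:8080/healthz
  initialDelaySeconds: 5
  timeoutSeconds: 1
  periodSeconds: 10
  successThreshold: 1
  failureThreshold: 3

With a 5-second delay, a 10-second period and a failure threshold of 3, the kubelet would restart the container within roughly half a minute of start, which lines up with the ~35-second run and clean exit (code 0) shown under Last State, so the restarts look consistent with the probe failing every time rather than the process crashing on its own.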

@cf-gitbot
Collaborator

We have created an issue in Pivotal Tracker to manage this:

https://www.pivotaltracker.com/story/show/178979988

The labels on this github issue will be updated when the story is started.

@arunkpatra

I'm having the same issue too.

@bhordupur

bhordupur commented Jul 24, 2021

Same issue here.

https://github.com/cloudfoundry/cf-for-k8s/blob/develop/config/capi/_ytt_lib/capi-k8s-release/config/worker_deployment.yml#L88

How could that be 15000m (15 vCPU)? Should it be 1500m instead? I tried running the deployment with 2500m (and some arbitrarily higher numbers for the CPU limits), but no luck.

[Screenshot 2021-07-24 at 23:45:59]

[Screenshot 2021-07-24 at 23:48:57]

[Screenshot 2021-07-24 at 23:57:35]

[Screenshot 2021-07-24 at 23:59:34]
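
In case it helps anyone who wants to experiment with those values without editing the vendored capi-k8s-release template: cf-for-k8s renders its manifests with ytt, so an overlay passed as an additional -f file should be able to override the worker resources. A rough sketch (the Deployment name is taken from the pod names above; the container name and the CPU value are assumptions to be checked against worker_deployment.yml):

#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata": {"name": "cf-api-worker"}})
---
spec:
  template:
    spec:
      containers:
      #! container name assumed; check the rendered deployment for the actual name
      #@overlay/match by="name"
      - name: cf-api-worker
        resources:
          requests:
            #! value chosen only for experimentation, not a recommendation
            cpu: 1500m

It would be applied the same way as cf-values, i.e. added to the ytt -f arguments when generating the installation manifest.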

@amalagaura

amalagaura commented Jul 30, 2021

As of v5.0.0, custom registries no longer seem to be supported. My registry-buddy error is:

2021/07/30 16:53:09 Error from healthyFunc(custom-docker-registry.net): error setting up transport to the registry: GET https://auth.docker.io/token?scope=repository%3Alibrary%2Fcustom-docker-registry.net%3Apush%2Cpull&service=registry.docker.io: unsupported status code 401

My cf-values.yaml is:

app_registry:
  hostname: custom-docker-registry.net
  repository_prefix: custom-docker-registry.net
  username: registry-access

This worked in v4.2.0. I don't see any documentation of this change. Is this the same issue that others are reporting, or should I create a separate issue?
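
Judging purely from the error text, the healthcheck appears to treat the configured path as a container image reference; since custom-docker-registry.net contains no "/", it is not recognized as a registry host, and the client falls back to Docker Hub (hence the auth.docker.io URL and the library/ prefix). If that reading is right, one thing that might be worth trying (a guess, not documented behavior) is making repository_prefix a full reference that starts with the registry host, e.g.:

app_registry:
  hostname: custom-docker-registry.net
  repository_prefix: custom-docker-registry.net/my-project
  username: registry-access

Here my-project is only a placeholder for whatever repository path the registry actually serves.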

@ghost

ghost commented Aug 3, 2021

Same issue here with v5.1 and Harbor:

Error from healthyFunc(registry.fluidcloud.bskyb.com/nimbus-test): error setting up transport to the registry: GET https://registry.fluidcloud.bskyb.com/service/token?scope=repository%3Animbus-test%3Apush%2Cpull&service=harbor-registry: unsupported status code 500

@devops-school

Somehow I got it working; I followed this guide:
https://www.devopsschool.com/blog/getting-started-with-cloud-foundry-for-kubernetes-using-cf-for-k8s-in-linuxubuntu/

@gitricko

I am facing this issue too... increasing the timeout does not seem to work.
