vCluster creating more trouble than helping (due to different causes) #1787
Comments
hey @MichaelKora, it's unfortunate that you have to experience these troubles. One thing that I'd recommend is to use the latest vcluster CLI, together with the 0.20 vcluster.yaml. Regarding the other issues: it's a bit hard to say from the outset what might be causing them. You seem to be leveraging Talos. What Kubernetes distro is running on top of it?
hey @heiko-braun 0.19.5 is the latest according to the vcluster CLI:

```
$ sudo vcluster upgrade
15:55:48 info Current binary is the latest version: 0.19.5
```

I have Talos running there (the default image); it's based on k3s.
Hi @MichaelKora, you can get the latest CLI (the one to be used with the 0.20 vcluster.yaml) here:
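As a rough sketch of that upgrade path (the release asset name and the 0.20 vcluster.yaml keys below are assumptions based on the loft-sh/vcluster release conventions, not details given in this thread):

```sh
# Fetch a specific vcluster CLI release; the asset name is assumed to follow
# the loft-sh/vcluster convention (vcluster-<os>-<arch>)
curl -L -o vcluster \
  "https://github.com/loft-sh/vcluster/releases/download/v0.20.0-beta.5/vcluster-linux-amd64"
sudo install -m 0755 vcluster /usr/local/bin/vcluster
vcluster --version

# Minimal 0.20-style vcluster.yaml; the keys are a sketch, check the vcluster docs for the full schema
cat > vcluster.yaml <<'EOF'
controlPlane:
  distro:
    k3s:
      enabled: true
EOF
```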
@MichaelKora regarding the hazelcast and solr examples in the description: did you run the commands against the host cluster or the virtual one?
@heiko-braun thanks for your response. I ran the commands against the vcluster; when run against the host cluster, I have no issues.
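For reference, switching between the two contexts looks roughly like this (release and namespace names taken from the reproduction steps below):

```sh
# Point kubectl at the virtual cluster
vcluster connect my-vcluster -n vcluster-ns
kubectl get namespaces

# Return to the host cluster context
vcluster disconnect
kubectl get namespaces
```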
@heiko-braun when the cluster is being created, the logs show:

```
2024-06-05 14:38:37 INFO setup/controller_context.go:196 couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6443/version": dial tcp 127.0.0.1:6443: connect: connection refused), will retry in 1 seconds {"component": "vcluster"}
2024-06-05 14:38:38 INFO setup/controller_context.go:196 couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6443/version": dial tcp 127.0.0.1:6443: connect: connection refused), will retry in 1 seconds {"component": "vcluster"}
2024-06-05 14:38:39 INFO setup/controller_context.go:196 couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6443/version": dial tcp 127.0.0.1:6443: connect: connection refused), will retry in 1 seconds {"component": "vcluster"}
2024-06-05 14:38:40 INFO commandwriter/commandwriter.go:126 error retrieving resource lock kube-system/kube-controller-manager: Get "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": dial tcp 127.0.0.1:6443: connect: connection refused {"component": "vcluster", "component": "controller-manager", "location": "leaderelection.go:332"}
```

It takes more than 60 minutes before the cluster reaches a healthy state. It seems very odd to me that it takes that long to create a virtual cluster.
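When the syncer keeps logging `connection refused` against 127.0.0.1:6443, a useful first check is whether the embedded control plane ever becomes ready. A sketch, assuming the chart defaults (a StatefulSet named after the release with `vcluster` and `syncer` containers, labelled `app=vcluster`):

```sh
# Is the vcluster pod ready, or stuck in a restart loop?
kubectl get pods -n vcluster-ns -o wide
kubectl describe pod -n vcluster-ns -l app=vcluster

# Compare the embedded k3s logs with the syncer logs
kubectl logs -n vcluster-ns my-vcluster-0 -c vcluster
kubectl logs -n vcluster-ns my-vcluster-0 -c syncer
```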
@MichaelKora how many nodes does your host cluster have, and what capacity? Do you use network policies?
hey @everflux I dedicated 2 nodes of the host cluster to the vcluster (8 CPU / 32 GB); I am not using any restrictive network policies.
This sounds like a setup problem to me, either with the host cluster or with vcluster. Did you try to set up one or multiple vclusters? (Check `kubectl get all -n vcluster-ns` and `kubectl get ns`.)
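Spelled out, those checks (plus recent events in the namespace) would be:

```sh
kubectl get all -n vcluster-ns
kubectl get ns
kubectl get events -n vcluster-ns --sort-by=.lastTimestamp
```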
@MichaelKora Are you still having issues or were you able to resolve them?
hey @deniseschannon, yes I am still having the issue!
@everflux I have just one set up.
@deniseschannon @heiko-braun @everflux Any update on the origin of that issue and how to fix it?
I think the Slack channel or direct consulting is a better place for support than a GitHub issue in this case.
What happened?
I honestly don't know if it is supposed to be that hard, but vcluster is creating more trouble than solutions. I've been working on getting a production-ready cluster for over a week and it's not working. Right now the cluster is up and running and I connect to it using a NodePort service. The issues:
- The coredns pod, though in state `Running`, is full of errors.
- When connected to the vcluster, requests deliver different responses each time. E.g. running `kubectl get namespaces` might show 4 namespaces, then 4, and then 6, etc.
- Running helm against the vcluster is nearly impossible; it times out nearly every single time.
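A small loop like the following (run while the kubeconfig points at the virtual cluster) makes the inconsistent responses easy to demonstrate; on a healthy cluster the count should be stable:

```sh
# Count namespaces a few times in a row; the number should not fluctuate
for i in $(seq 1 5); do
  kubectl get namespaces --no-headers | wc -l
  sleep 2
done
```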
This is all bothering me because I was expecting vCluster to be way easier to use.
What did you expect to happen?
I deployed a cluster and expected the CoreDNS pods to be deployed.
How can we reproduce it (as minimally and precisely as possible)?
```sh
helm upgrade -i my-vcluster vcluster \
  --repo https://charts.loft.sh \
  --namespace vcluster-ns --create-namespace \
  --repository-config='' \
  -f vcluster.yaml \
  --version 0.20.0-beta.5
```
Anything else we need to know?
I used a NodePort service to connect to the cluster.
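For completeness, connecting through such an exposed service typically means pointing the CLI at the external address; a sketch, where the node IP and port are placeholders and the `--server` flag is taken from the vcluster docs on exposed clusters rather than from this thread:

```sh
# Find the NodePort the vcluster service is exposed on
kubectl get svc -n vcluster-ns

# Connect via the external address instead of a local port-forward
vcluster connect my-vcluster -n vcluster-ns --server=https://<node-ip>:<node-port>
```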
Host cluster Kubernetes version
Host cluster Kubernetes distribution
vcluster version
Vcluster Kubernetes distribution (k3s (default), k8s, k0s)
OS and Arch