Liqo does not work with Cilium with eBPF Host Routing or conntrack disabled #2166
Comments
There's another issue too, even without eBPF host routing enabled: liqo-auth is spammed with EOF errors.
The source addresses above are the Cilium "routers" on the nodes.
That's because they open and close TCP connections to the service.
Hi, sorry for the late reply. We are starting to investigate your issues. @yoctozepto and @stelucz, have you encountered these problems only with in-band peering, or also with out-of-band peering?
Hi @cheina97, my "problem" with errors in the logs appeared right after deploying Liqo; no peering had been established yet.
Thanks
We're trying to peer two GKE clusters, where the destination cluster runs Dataplane V2 (Cilium-based), and we also encounter these errors: Peering failed to send identity request: Post "https://10.131.0.3:443/identity/certificate": context deadline exceeded (Client.Timeout exceeded while awaiting headers). If peering […]
Hey folks. FWIW, I'm also experiencing this with Cilium (chart version …) and the following values:

```yaml
eni:
  enabled: true
  awsEnablePrefixDelegation: true
  awsReleaseExcessIPs: true
ipam:
  mode: eni
egressMasqueradeInterfaces: eth+
tunnel: disabled
hubble:
  relay:
    enabled: true
  ui:
    enabled: false
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: liqo.io/type
              operator: DoesNotExist
```

The Liqo "consumer" cluster is EKS 1.29.4 and the "producer" cluster is GKE 1.29.3. Is it reasonable to expect that this will work any time soon?

Update (07.06.24):
As such, keep that in mind when installing Liqo directly with Helm or with liqoctl. One-liner example:

```shell
$ liqoctl --context=some-cluster install eks \
    --eks-cluster-region=${EKS_CLUSTER_REGION} \
    --eks-cluster-name=${EKS_CLUSTER_NAME} \
    --user-name liqo-cluster-user \
    --set auth.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-scheme"=internet-facing \
    --set gateway.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-scheme"=internet-facing
```
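For anyone installing with Helm instead of liqoctl, the two `--set` flags above should correspond to values along these lines (a sketch; the chart keys `auth.service.annotations` and `gateway.service.annotations` are taken from the command above, and the annotation is the standard AWS load balancer scheme annotation):

```yaml
auth:
  service:
    annotations:
      # Exposes the auth service via an internet-facing AWS load balancer.
      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
gateway:
  service:
    annotations:
      # Same scheme for the gateway (WireGuard tunnel endpoint) service.
      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
```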
What happened:
Peering Liqo clusters where either one runs Cilium with either eBPF Host Routing [1] (required by, and enabled by default after enabling, kube-proxy replacement and eBPF masquerading) or iptables (netfilter) Connection Tracking (conntrack) bypass [2] results in packets being dropped along the Liqo WireGuard VPN tunnel path. For example, in-band peering fails at authentication because the two control planes cannot actually reach each other (despite the "successful" tunnel establishment).
[1] https://docs.cilium.io/en/stable/operations/performance/tuning/#ebpf-host-routing
[2] https://docs.cilium.io/en/stable/operations/performance/tuning/#bypass-iptables-connection-tracking
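For reference, the two tuning features cited above are controlled by Cilium Helm values roughly as follows. This is a sketch based on the Cilium tuning guide; exact keys can vary between Cilium versions:

```yaml
# Kube-proxy replacement plus eBPF masquerading; on a supported kernel
# this also activates eBPF host routing [1].
kubeProxyReplacement: true
bpf:
  masquerade: true
  # false (the default) allows the eBPF host-routing fast path.
  hostLegacyRouting: false
# Skips installing the iptables conntrack rules, bypassing netfilter
# connection tracking [2].
installNoConntrackIptablesRules: true
```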
What you expected to happen:
I expect Liqo to work in this situation.
How to reproduce it (as minimally and precisely as possible):
Deploy Cilium on a modern kernel (see the referenced docs) with the following minimal values.yaml file contents:

Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`):