From 9f768b606732facf344114de14f6a0fda8d30456 Mon Sep 17 00:00:00 2001
From: Stefan Matting
Date: Tue, 12 Sep 2023 18:24:01 +0200
Subject: [PATCH] wip2

---
 offline/docs_ubuntu_22.04.md | 97 +++++++++++++++++++-----------------
 1 file changed, 52 insertions(+), 45 deletions(-)

diff --git a/offline/docs_ubuntu_22.04.md b/offline/docs_ubuntu_22.04.md
index 2c89ed2b4..d89ba25f9 100644
--- a/offline/docs_ubuntu_22.04.md
+++ b/offline/docs_ubuntu_22.04.md
@@ -530,38 +530,34 @@ d helm install ingress-nginx-controller ./charts/ingress-nginx-controller --valu
 
 #### Using network services
 
-Most enterprises have network service teams to forward traffic appropriately. Ask that your network team forward TCP port 443 to each one of the kubernetes servers on port 31773. ask the same for port 80, directing it to 31772.
-
-If they ask for clarification, a longer way of explaining it is "wire expects https traffic to be on port 31773, and http traffic to go to port 80. a load balancing rule needs to be in place, so that no matter which kubernetes host is up or down, the router will direct traffic to one of the operational kubernetes nodes. any node that accepts connections on port 31773 and 31772 can be considered as operational."
+The goal of this section is to forward traffic on ports 443 and 80 to the kubernetes node(s) running the ingress service.
+Wire expects https traffic on port 443 to be forwarded to port 31773, and http traffic on port 80 to be forwarded to port 31772.
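+
+For illustration only, a minimal sketch of such a forwarding rule, written as an nginx `stream` configuration on an external load balancer (this load balancer is an assumption about your network setup and not part of the wire installation; the node IPs are placeholders and the nginx stream module must be available):
+
+```
+stream {
+    # TCP pass-through of https traffic to the ingress https NodePort (31773)
+    upstream k8s_https { server <kubenode1-ip>:31773; server <kubenode2-ip>:31773; server <kubenode3-ip>:31773; }
+    # TCP pass-through of http traffic to the ingress http NodePort (31772)
+    upstream k8s_http  { server <kubenode1-ip>:31772; server <kubenode2-ip>:31772; server <kubenode3-ip>:31772; }
+
+    server { listen 443; proxy_pass k8s_https; }
+    server { listen 80;  proxy_pass k8s_http; }
+}
+```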
 
 #### Through an IP Masquerading Firewall
 
 Your ip masquerading firewall must forward port 443 and port 80 to one of the kubernetes nodes (which must always remain online).
 Additionally, if you want to use letsEncrypt CA certificates, items behind your firewall must be redirected to your kubernetes node, when the cluster is attempting to contact the outside IP.
 
-The following instructions are given only as an example.
-Properly configuring IP Masquerading requires a seasoned linux administrator with deep knowledge of networking.
-They assume all traffic destined to your wire cluster is going through a single IP masquerading firewall, running some modern version of linux.
+The following instructions are given only as an example. Depending on your network setup, different masquerading rules may be required.
+In the following, we assume that all traffic destined for your wire cluster goes through a single IP masquerading firewall.
 
 ##### Incoming SSL Traffic
 
-Here, you should check the ethernet interface name for your outbound IP.
-```
-ip ro | sed -n "/default/s/.* dev \([enpso0-9]*\) .*/export OUTBOUNDINTERFACE=\1/p"
+To prepare, determine the interface of your outbound IP:
+```
 export OUTBOUNDINTERFACE=$(ip ro | sed -n "/default/s/.* dev \([enpso0-9]*\) .*/\1/p")
 echo "OUTBOUNDINTERFACE is $OUTBOUNDINTERFACE"
 ```
 
-This will return a shell command setting a variable to your default interface. copy and paste it. next, supply your outside IP address:
+Please check that `OUTBOUNDINTERFACE` is set correctly before continuing.
+
+Supply your outside IP address:
+
 ```
 export PUBLICIPADDRESS=
 ```
 
-Select one of your kubernetes nodes that you are fine with losing service if it is offline (for example kubenode3):
-
-Make sure it is the same pod on which ingress-nginx is running:
-
 1. Find out on which node `ingress-nginx` is running:
 ```
 d kubectl get pods -l app.kubernetes.io/name=ingress-nginx -o=custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
@@ -576,68 +572,79 @@ export KUBENODEIP=
 then, in case the server owns the public IP (i.e. you can see the IP in `ip addr`), run the following:
 ```
 sudo bash -c "
-set -eo pipefail;
+set -xeo pipefail;
 
-echo meh: $OUTBOUNDINTERFACE
-# iptables -t nat -A PREROUTING -d $PUBLICIPADDRESS -i $OUTBOUNDINTERFACE -p tcp --dport 80 -j DNAT --to-destination $KUBENODEIP:31772;
-# iptables -t nat -A PREROUTING -d $PUBLICIPADDRESS -i $OUTBOUNDINTERFACE -p tcp --dport 443 -j DNAT --to-destination $KUBENODEIP:31773;
+iptables -t nat -A PREROUTING -d $PUBLICIPADDRESS -i $OUTBOUNDINTERFACE -p tcp --dport 80 -j DNAT --to-destination $KUBENODEIP:31772;
+iptables -t nat -A PREROUTING -d $PUBLICIPADDRESS -i $OUTBOUNDINTERFACE -p tcp --dport 443 -j DNAT --to-destination $KUBENODEIP:31773;
 "
 ```
 
+If you are running a UFW firewall, make sure to add these iptables rules to /etc/ufw/before.rules, so they persist after a reboot.
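+
+As a sketch of what that looks like (assuming the default Ubuntu layout of /etc/ufw/before.rules): add a `*nat` block like the one below above the existing `*filter` section, then reload with `sudo ufw reload`. Note that before.rules cannot expand shell variables, so the interface, public IP and node IP have to be written out literally:
+
+```
+*nat
+:PREROUTING ACCEPT [0:0]
+-A PREROUTING -d <public-ip> -i <outbound-interface> -p tcp --dport 80 -j DNAT --to-destination <kubenode-ip>:31772
+-A PREROUTING -d <public-ip> -i <outbound-interface> -p tcp --dport 443 -j DNAT --to-destination <kubenode-ip>:31773
+COMMIT
+```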
+
 If your server is being forwarded traffic from another firewall (you do not see the IP in `ip addr`), run the following:
 ```
-sudo iptables -t nat -A PREROUTING -i $OUTBOUNDINTERFACE -p tcp --dport 80 -j DNAT --to-destination $KUBENODEIP:31772
-sudo iptables -t nat -A PREROUTING -i $OUTBOUNDINTERFACE -p tcp --dport 443 -j DNAT --to-destination $KUBENODEIP:31773
-```
+sudo bash -c "
+set -eo pipefail;
 
-If you are running a UFW firewall, make sure to add the above iptables rules to /etc/ufw/before.rules, so they persist after a reboot.
+iptables -t nat -A PREROUTING -i $OUTBOUNDINTERFACE -p tcp --dport 80 -j DNAT --to-destination $KUBENODEIP:31772;
+iptables -t nat -A PREROUTING -i $OUTBOUNDINTERFACE -p tcp --dport 443 -j DNAT --to-destination $KUBENODEIP:31773;
+"
+```
+If you are running a UFW firewall, make sure to add these iptables rules to /etc/ufw/before.rules, so they persist after a reboot.
 
 If you are running a UFW firewall, make sure to allow inbound traffic on 443 and 80:
 ```
-sudo ufw enable
-sudo ufw allow in on $OUTBOUNDINTERFACE proto tcp to any port 443
-sudo ufw allow in on $OUTBOUNDINTERFACE proto tcp to any port 80
+sudo bash -c "
+set -eo pipefail;
+
+ufw enable;
+ufw allow in on $OUTBOUNDINTERFACE proto tcp to any port 443;
+ufw allow in on $OUTBOUNDINTERFACE proto tcp to any port 80;
+"
 ```
 
 ###### Mirroring the public IP
 
-cert-manager has a requirement on being able to reach the kubernetes on it's external IP. this is trouble, because in most security concious environments, the external IP is not owned by any of the kubernetes hosts.
+`cert-manager` needs to be able to reach the kubernetes cluster on its external IP. This is a problem, because in most security-conscious environments, the external IP is not owned by any of the kubernetes hosts.
 
-on an IP Masquerading router, you can redirect outgoing traffic from your cluster, that is to say, when the cluster asks to connect to your external IP, you can instead choose to send it to a kubernetes node inside of the cluster.
+On an IP masquerading router, you can redirect outgoing traffic from your cluster, i.e. when the cluster tries to connect to your external IP, you can instead send that traffic to a kubernetes node inside the cluster.
 
 ```
 export INTERNALINTERFACE=br0
-sudo iptables -t nat -A PREROUTING -i $INTERNALINTERFACE -d $PUBLICIPADDRESS -p tcp --dport 80 -j DNAT --to-destination $KUBENODEIP:31772
-sudo iptables -t nat -A PREROUTING -i $INTERNALINTERFACE -d $PUBLICIPADDRESS -p tcp --dport 443 -j DNAT --to-destination $KUBENODEIP:31773
+sudo bash -c "
+set -xeo pipefail;
+
+iptables -t nat -A PREROUTING -i $INTERNALINTERFACE -d $PUBLICIPADDRESS -p tcp --dport 80 -j DNAT --to-destination $KUBENODEIP:31772;
+iptables -t nat -A PREROUTING -i $INTERNALINTERFACE -d $PUBLICIPADDRESS -p tcp --dport 443 -j DNAT --to-destination $KUBENODEIP:31773;
+"
 ```
 
+If you are running a UFW firewall, make sure to add these iptables rules to /etc/ufw/before.rules, so they persist after a reboot.
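+
+To verify the redirect (a sketch; it assumes `curl` is available on the kubernetes node, and you should substitute the actual public IP wherever the variables are not exported):
+
+```
+# on one of the kubernetes nodes: the answer should now come from the ingress
+# (e.g. its TLS certificate or default 404 page), not from the firewall
+curl -vk https://$PUBLICIPADDRESS/
+
+# on the firewall: the packet counters of the DNAT rules should increase
+sudo iptables -t nat -L PREROUTING -n -v
+```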
+
 ### Incoming Calling Traffic
 
-Here, you should check the ethernet interface name for your outbound IP.
-```
-ip ro | sed -n "/default/s/.* dev \([enps0-9]*\) .*/export OUTBOUNDINTERFACE=\1/p"
-```
+Make sure `OUTBOUNDINTERFACE` and `PUBLICIPADDRESS` are exported (see above).
 
-This will return a shell command setting a variable to your default interface. copy and paste it. next, supply your outside IP address:
-```
-export PUBLICIPADDRESS=
-```
+Select one of your kubernetes nodes that hosts restund:
 
-Select one of your kubernetes nodes that you are fine with losing service if it is offline:
 ```
 export RESTUND01IP=
 ```
 
 then run the following:
 ```
-sudo iptables -t nat -A PREROUTING -d $PUBLICIPADDRESS -i $OUTBOUNDINTERFACE -p tcp --dport 80 -j DNAT --to-destination $RESTUND01IP:80
-sudo iptables -t nat -A PREROUTING -d $PUBLICIPADDRESS -i $OUTBOUNDINTERFACE -p udp --dport 80 -j DNAT --to-destination $RESTUND01IP:80
-sudo iptables -t nat -A PREROUTING -d $PUBLICIPADDRESS -i $OUTBOUNDINTERFACE -p udp -m udp --dport 32768:60999 -j DNAT --to-destination $RESTUND01IP
+sudo bash -c "
+set -eo pipefail;
+
+iptables -t nat -A PREROUTING -d $PUBLICIPADDRESS -i $OUTBOUNDINTERFACE -p tcp --dport 80 -j DNAT --to-destination $RESTUND01IP:80;
+iptables -t nat -A PREROUTING -d $PUBLICIPADDRESS -i $OUTBOUNDINTERFACE -p udp --dport 80 -j DNAT --to-destination $RESTUND01IP:80;
+iptables -t nat -A PREROUTING -d $PUBLICIPADDRESS -i $OUTBOUNDINTERFACE -p udp -m udp --dport 32768:60999 -j DNAT --to-destination $RESTUND01IP;
+"
 ```
 or add an appropriate rule to a config file (for UFW, /etc/ufw/before.rules)
 
-### Changing the TURN port.
+### Changing the TURN port
 
 FIXME: ansibleize this!
 
 turn's connection port for incoming clients is set to 80 by default. to change it:
@@ -653,8 +660,8 @@ SSL certificates are required by the nginx-ingress-services helm chart. You can
 
 Move the example values for `nginx-ingress-services`:
 ```
-mv ./values/nginx-ingress-services/prod-values.example.yaml ./values/nginx-ingress-services/values.yaml
-mv ./values/nginx-ingress-services/prod-secrets.example.yaml ./values/nginx-ingress-services/secrets.yaml
+cp ./values/nginx-ingress-services/prod-values.example.yaml ./values/nginx-ingress-services/values.yaml
+cp ./values/nginx-ingress-services/prod-secrets.example.yaml ./values/nginx-ingress-services/secrets.yaml
 ```
 
 #### Bring your own certificates
@@ -682,7 +689,7 @@ d helm install nginx-ingress-services ./charts/nginx-ingress-services --values .
 
 #### Use letsencrypt generated certificates
 
-If you are using a single external IP and no route than you need to make sure that the cert-manger pods are not deployed on the same node as ingress-nginx-controller node.
+If you are using a single external IP and no route, then you need to make sure that the cert-manager pods are not deployed on the same node as the ingress-nginx-controller pod. To do that, first check which node the ingress-nginx-controller pod is running on -
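+
+One way to do this check (a sketch: the first command reuses the ingress-nginx label selector from the step further up; the second simply greps all pods, since the exact cert-manager labels depend on how it was installed):
+
+```
+# node the ingress-nginx controller is running on
+d kubectl get pods -l app.kubernetes.io/name=ingress-nginx -o wide
+
+# nodes the cert-manager pods are running on
+d kubectl get pods -A -o wide | grep cert-manager
+```
+
+If they end up on the same node, move cert-manager to a different one, for example via a node selector in its chart values (whether and where such a value is exposed depends on the cert-manager chart you deploy).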