
improve the firewall rules we use, so we are no longer touching the firewall on the kubenodes, and have a process for running restund alongside our other services. (#594)
julialongtin authored Jan 5, 2023
1 parent 7d1bdbe commit 6517979
Showing 2 changed files with 65 additions and 18 deletions.
30 changes: 18 additions & 12 deletions offline/docs.md
@@ -129,9 +129,14 @@ You'll need at least 3 `kubenode`s. 3 of them should be added to the
additional nodes should only be added to the `[kube-node]` group.

### Setting up databases and kubernetes to talk over the correct (private) interface
If you are deploying Wire on servers that are expected to use one interface to talk to the public internet, and a separate interface to talk amongst themselves, you will need to add `ip=` declarations for the private interface of each node. For instance, if the first kubenode is expected to talk to the world on 172.16.0.129, but to speak to other Wire services (kubernetes, databases, etc.) on 192.168.0.2, edit its entry like the following:
```
kubenode1 ansible_host=172.16.0.129 ip=192.168.0.2
```
Do this for all of the instances.
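As a sketch, a three-kubenode inventory section might then look like the following (the host names match the docs; the additional addresses are purely illustrative):

```
kubenode1 ansible_host=172.16.0.129 ip=192.168.0.2
kubenode2 ansible_host=172.16.0.130 ip=192.168.0.3
kubenode3 ansible_host=172.16.0.131 ip=192.168.0.4
```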

* For `kubenode`s, make sure that `ip` is set to the IP on which the nodes should talk among each other.
* Make sure that `assethost` is present in the inventory file with the correct `ansible_host` and `ip` values
### Setting up Database network interfaces.
* Make sure that `assethost` is present in the inventory file with the correct `ansible_host` (and `ip` values if required)
* Make sure that `cassandra_network_interface` is set to the interface on which
  the kubenodes can reach cassandra and on which the cassandra nodes
  communicate among each other (your private network).
@@ -400,25 +405,20 @@ export KUBENODE1IP=<your.kubernetes.node.ip>
then run the following:
```
sudo iptables -t nat -A PREROUTING -d $PUBLICIPADDRESS -i $OUTBOUNDINTERFACE -p tcp --dport 80 -j DNAT --to-destination $KUBENODE1IP:80
sudo iptables -t nat -A PREROUTING -d $PUBLICIPADDRESS -i $OUTBOUNDINTERFACE -p tcp --dport 443 -j DNAT --to-destination $KUBENODE1IP:443
sudo iptables -t nat -A PREROUTING -d $PUBLICIPADDRESS -i $OUTBOUNDINTERFACE -p tcp --dport 80 -j DNAT --to-destination $KUBENODE1IP:31772
sudo iptables -t nat -A PREROUTING -d $PUBLICIPADDRESS -i $OUTBOUNDINTERFACE -p tcp --dport 443 -j DNAT --to-destination $KUBENODE1IP:31773
```
Or add an appropriate rule to a config file (for UFW, `/etc/ufw/before.rules`).
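As a sketch of the UFW variant: the equivalent rules could go in a `*nat` section of `/etc/ufw/before.rules`. The angle-bracket placeholders are assumptions here, standing in for the literal values of the variables exported above (UFW config files do not expand shell variables):

```
*nat
:PREROUTING ACCEPT [0:0]
# Substitute your actual public IP, outbound interface, and kubenode IP.
-A PREROUTING -d <PUBLICIPADDRESS> -i <OUTBOUNDINTERFACE> -p tcp --dport 80 -j DNAT --to-destination <KUBENODE1IP>:31772
-A PREROUTING -d <PUBLICIPADDRESS> -i <OUTBOUNDINTERFACE> -p tcp --dport 443 -j DNAT --to-destination <KUBENODE1IP>:31773
COMMIT
```

Reload with `sudo ufw reload` for the file-based rules to take effect.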
Then ssh into the first kubenode and apply the following configuration:
```
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 31773
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 31772
```
###### Mirroring the public IP
cert-manager needs to be able to reach the kubernetes cluster on its external IP. This is trouble, because in most security-conscious environments, the external IP is not owned by any of the kubernetes hosts.
On an IP-masquerading router, you can redirect outgoing traffic from your cluster; that is to say, when the cluster asks to connect to your external IP, you can instead send that traffic to a kubernetes node inside the cluster.
```
export INTERNALINTERFACE=br0
sudo iptables -t nat -A PREROUTING -i $INTERNALINTERFACE -d $PUBLICIPADDRESS -p tcp -m multiport --dports 80,443 -j DNAT --to-destination $KUBENODE1IP
sudo iptables -t nat -A PREROUTING -i $INTERNALINTERFACE -d $PUBLICIPADDRESS -p tcp --dport 80 -j DNAT --to-destination $KUBENODE1IP:31772
sudo iptables -t nat -A PREROUTING -i $INTERNALINTERFACE -d $PUBLICIPADDRESS -p tcp --dport 443 -j DNAT --to-destination $KUBENODE1IP:31773
```
### Incoming Calling Traffic
@@ -478,7 +478,7 @@ d helm install nginx-ingress-services ./charts/nginx-ingress-services --values .
Do not try to use paths to refer to the certificates, as the 'd' command messes with file paths outside of Wire-Server.
##### In your nginx config
##### In your nginx-ingress-services values file
Change the domains in `values.yaml` to your domain, and add your wildcard or SAN certificate that is valid for all of these
domains to the `secrets.yaml` file.
@@ -518,6 +518,11 @@ d helm upgrade --install -n cert-manager-ns --set 'installCRDs=true' cert-manage
d helm upgrade --install nginx-ingress-services charts/nginx-ingress-services -f values/nginx-ingress-services/values.yaml
```
Watch the output of the following command to know how your request is going:
```
d kubectl get certificate
```
#### Old wire-server releases
On older wire-server releases, nginx-ingress-services may fail to deploy, as some version numbers of services have changed. Make the following changes, and try to re-deploy until it works.
@@ -576,3 +581,4 @@ d helm upgrade --install sftd ./charts/sftd \
--set-file tls.key=/path/to/tls.key \
--values values/sftd/values.yaml
```
53 changes: 47 additions & 6 deletions offline/kvm-hetzner.md
@@ -451,30 +451,71 @@ switch to docs.md.
Skip down to 'Making tooling available in your environment'.

When editing the inventory, create `ansnode` entries, rather than separate cassandra, elasticsearch, and minio nodes.
```
ansnode1 ansible_host=172.16.0.132
ansnode2 ansible_host=172.16.0.133
ansnode3 ansible_host=172.16.0.134
```

Add all three ansnode entries into the `cassandra` `elasticsearch`, and `minio` sections.
Add all three ansnode entries into the `cassandra`, `elasticsearch`, and `minio` sections. They should look like the following:
```
[elasticsearch]
# elasticsearch1
# elasticsearch2
# elasticsearch3
ansnode1
ansnode2
ansnode3
[minio]
# minio1
# minio2
# minio3
ansnode1
ansnode2
ansnode3
[cassandra]
# cassandra1
# cassandra2
# cassandra3
```

Add two of the ansnode entries into the `restund` section

Add one of the ansnode entries into the `cassandra_seed` section.

### ERROR: after you install restund, the restund firewall will fail to start.

Delete the outbound rule to 172.16.0.0/12:
```
sudo ufw status numbered
sudo ufw delete <right number>
```

#### enable the ports colocated services run on:
cassandra:
```
sudo ufw allow 9042/tcp
sudo ufw allow 9160/tcp
sudo ufw allow 7000/tcp
sudo ufw allow 7199/tcp
```

elasticsearch:
```
sudo ufw allow 9300/tcp
sudo ufw allow 9200/tcp
```

minio:
```
sudo ufw allow 9000/tcp
sudo ufw allow 9092/tcp
```
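The per-service commands above can also be generated in one pass. This sketch (not from the original docs) only prints the `ufw` commands so they can be reviewed before being piped to `sudo sh`, and therefore needs no root privileges itself:

```shell
# Ports for the colocated services: cassandra (9042, 9160, 7000, 7199),
# elasticsearch (9300, 9200), and minio (9000, 9092).
for port in 9042 9160 7000 7199 9300 9200 9000 9092; do
  echo "sudo ufw allow ${port}/tcp"
done
```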

#### install turn pointing to port 8080



