From 9158ba9ab87846b76318d35526eb20b9d93a9773 Mon Sep 17 00:00:00 2001
From: Julia Longtin
Date: Sun, 14 Aug 2022 20:24:06 +0100
Subject: [PATCH] Update docs (#570)

* general improvements, add workaround for deb archive key expiry, and add more routing information.
* add initial kvm-hetzner howto: how to deploy wire on a single hetzner physical machine.
* make the easier path the default.
* add old debian key workaround.
---
 offline/docs.md        | 201 +++++++++++++++++++----
 offline/kvm-hetzner.md | 360 +++++++++++++++++++++++++++++++++++++++++
 offline/upgrading.md   | 110 ++++++++++---
 3 files changed, 623 insertions(+), 48 deletions(-)
 create mode 100644 offline/kvm-hetzner.md

diff --git a/offline/docs.md b/offline/docs.md
index 52ad3d6d8..4646922a0 100644
--- a/offline/docs.md
+++ b/offline/docs.md
@@ -102,11 +102,19 @@ The following artifacts are provided:

 ## Editing the inventory

-Open `ansible/inventory/offline/99-static`. Here you will describe the topology
-of your offline deploy. There's instructions in the comments on how to set
+Copy `ansible/inventory/offline/99-static` to `ansible/inventory/offline/hosts.ini`, and remove the original.
+Here you will describe the topology of your offline deploy. There are instructions in the comments on how to set
 everything up. You can also refer to extra information here.
 https://docs.wire.com/how-to/install/ansible-VMs.html

+If you are using a username/password to log in and to sudo up, add the following in the `vars:all` section:
+```
+ansible_user=
+ansible_password=
+ansible_become_password=
+```
+
 ### Configuring kubernetes and etcd

 You'll need at least 3 `kubenode`s. 3 of them should be added to the
@@ -121,7 +129,7 @@ additional nodes should only be added to the `[kube-node]` group.
   the kubenodes can reach cassandra and on which the cassandra nodes
   communicate among eachother. Your private network.
 * Similarly `elasticsearch_network_interface` and `minio_network_interface`
-  should be set to the private network interface to too.
+  should be set to the private network interface as well.

 ### Configuring Restund

@@ -173,47 +181,85 @@ Please run:

 This should generate two files. `./ansible/inventory/group_vars/all/secrets.yaml` and
 `values/wire-server/secrets.yaml`.

-
 ## Deploying Kubernetes, Restund and stateful services

+### WORKAROUND: old debian key
+All of our debian archives up to version 4.12.0 used a now-outdated debian repository signature. Some modifications are required to be able to install everything properly.
+
+First, gather a copy of the 'setup-offline-sources.yml' file from: https://raw.githubusercontent.com/wireapp/wire-server-deploy/kvm_support/ansible/setup-offline-sources.yml .
+```
+wget https://raw.githubusercontent.com/wireapp/wire-server-deploy/kvm_support/ansible/setup-offline-sources.yml
+```
+Copy it into the ansible/ directory:
+```
+cp ansible/setup-offline-sources.yml ansible/setup-offline-sources.yml.backup
+cp setup-offline-sources.yml ansible/
+```
+
+Open it with your preferred text editor and edit the following:
+* find the big block of comments and uncomment everything in it (`- name: trust everything...`)
+* after the block you will find `- name: Register offline repo key...`. Comment out that segment (do not comment out the part with `- name: Register offline repo`!)
+
+Then disable checking for outdated signatures by editing the following file:
+```
+ansible/roles/external/kubespray/roles/container-engine/docker/tasks/main.yml
+```
+* comment out the block with `- name: ensure docker-ce repository public key is installed...`
+* comment out the next block `- name: ensure docker-ce repository is enabled`
+
+Now you are ready to start deploying services.
+
+#### WORKAROUND: dependency
+Some ubuntu systems do not have GPG installed by default. Wire assumes it is already present; ensure gpg is installed on all of your nodes before continuing to the next step.
+
+### Deploying with Ansible
+
 In order to deploy all the ansible-managed services you can run:
 ```
-d ./bin/offline-cluster.sh
+# d ./bin/offline-cluster.sh
 ```
+However, a more conservative approach is to perform each step of the script by hand, for better understanding and better handling of errors.

-However we now explain each step of the script step by step too. For better understanding.
+#### Populate the assethost, and prepare to install images from it.

 Copy over binaries and debs, serve assets from the asset host, and configure other hosts to fetch debs from it:

 ```
-d ansible-playbook -i ./ansible/inventory/offline ansible/setup-offline-sources.yml
+d ansible-playbook -i ./ansible/inventory/offline/hosts.ini ansible/setup-offline-sources.yml
 ```
+If this step fails partway, and you know that parts of it completed, the `--skip-tags debs,binaries,containers,containers-helm,containers-other` tags may come in handy.

+#### Kubernetes, part 1
 Run kubespray until docker is installed and runs. This allows us to preseed the docker containers that are
 part of the offline bundle:

 ```
-d ansible-playbook -i ./ansible/inventory/offline ansible/kubernetes.yml --tags bastion,bootstrap-os,preinstall,container-engine
+d ansible-playbook -i ./ansible/inventory/offline/hosts.ini ansible/kubernetes.yml --tags bastion,bootstrap-os,preinstall,container-engine
 ```

+#### Restund
 Now, run the restund playbook until docker is installed:
 ```
-d ansible-playbook -i ./ansible/inventory/offline ansible/restund.yml --tags docker
+d ansible-playbook -i ./ansible/inventory/offline/hosts.ini ansible/restund.yml --tags docker
 ```

+#### Pushing docker containers to the kubenodes, and restund nodes.
 With docker being installed on all nodes that need it, seed all container images:

 ```
-d ansible-playbook -i ./ansible/inventory/offline ansible/seed-offline-docker.yml
+d ansible-playbook -i ./ansible/inventory/offline/hosts.ini ansible/seed-offline-docker.yml
 ```

+#### Kubernetes, part 2
 Run the rest of kubespray. This should bootstrap a kubernetes cluster successfully:
 ```
-d ansible-playbook -i ./ansible/inventory/offline ansible/kubernetes.yml --skip-tags bootstrap-os,preinstall,container-engine
+d ansible-playbook -i ./ansible/inventory/offline/hosts.ini ansible/kubernetes.yml --skip-tags bootstrap-os,preinstall,container-engine
 ```

+#### Ensuring kubernetes is healthy.
 Ensure the cluster comes up healthy. The container also contains kubectl, so check the node status:

@@ -222,28 +268,28 @@ d kubectl get nodes -owide
 ```

 They should all report ready.
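+
+If you prefer a non-interactive check at this point, kubectl can block until every node reports Ready (an optional sketch; the 300-second timeout is an arbitrary choice):
+```
+d kubectl wait --for=condition=Ready nodes --all --timeout=300s
+```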
+
+#### Non-kubernetes services (restund, cassandra, elasticsearch, minio)
 Now, deploy all other services which don't run in kubernetes.

 ```
-d ansible-playbook -i ./ansible/inventory/offline ansible/restund.yml
-d ansible-playbook -i ./ansible/inventory/offline ansible/cassandra.yml
-d ansible-playbook -i ./ansible/inventory/offline ansible/elasticsearch.yml
-d ansible-playbook -i ./ansible/inventory/offline ansible/minio.yml
+d ansible-playbook -i ./ansible/inventory/offline/hosts.ini ansible/restund.yml
+d ansible-playbook -i ./ansible/inventory/offline/hosts.ini ansible/cassandra.yml
+d ansible-playbook -i ./ansible/inventory/offline/hosts.ini ansible/elasticsearch.yml
+d ansible-playbook -i ./ansible/inventory/offline/hosts.ini ansible/minio.yml
 ```
-
-
 Afterwards, run the following playbook to create helm values that tell our helm charts
 what the IP addresses of cassandra, elasticsearch and minio are.

 ```
-d ansible-playbook -i ./ansible/inventory/offline ansible/helm_external.yml
+d ansible-playbook -i ./ansible/inventory/offline/hosts.ini ansible/helm_external.yml
 ```

-
-## Deploying wire-server using helm
+### Deploying Wire

 It's now time to deploy the helm charts on top of kubernetes, installing the Wire platform.

+#### Finding the stateful services
 First, make kubernetes aware of where all the external stateful services are by running:

 ```
@@ -258,6 +304,7 @@ Also copy the values file for `databases-ephemeral` as it is required for the ne
 cp values/databases-ephemeral/prod-values.example.yaml values/databases-ephemeral/values.yaml
 ```

+#### Deploying stateless dependencies
 Next, we have 4 services that need to be deployed but need no additional configuration:
 ```
 d helm install fake-aws ./charts/fake-aws --values ./values/fake-aws/prod-values.example.yaml
@@ -266,10 +313,12 @@ d helm install databases-ephemeral ./charts/databases-ephemeral/ --values ./valu
 d helm install reaper ./charts/reaper
 ```

+#### Preparing your values
+
 Next, move `./values/wire-server/prod-values.example.yaml` to `./values/wire-server/values.yaml`.
 Inspect all the values and adjust domains to your domains where needed.

-Add the IPs of your `restund` servers to the `turnStatic.v2` list.:
+Add the IPs of your `restund` servers to the `turnStatic.v2` list:
 ```yaml
 turnStatic:
   v1: []
   v2:
     - "turn::
     - "turn::?transport=tcp"
 ```
@@ -284,40 +333,132 @@ Open up `./values/wire-server/secrets.yaml` and inspect the values. In theory
 this file should have only generated secrets, and no additional secrets have to
 be added, unless additional options have been enabled.

+Open up `./values/wire-server/values.yaml` and replace example.com and the other domains and subdomains with your domain. You can do it with:
+
+```
+sed -i 's/example.com/<your-domain>/g' values.yaml
+```
+
+
+#### Deploying Wire-Server
+
 Now deploy `wire-server`:
 ```
 d helm install wire-server ./charts/wire-server --timeout=15m0s --values ./values/wire-server/values.yaml --values ./values/wire-server/secrets.yaml
 ```

+## Directing Traffic to Wire
-
-## Configuring ingress
-
-First, install the `nginx-ingress-controller`. This requires no configuration:
+
+### Deploy nginx-ingress-controller
+This component requires no configuration, and is a requirement for all of the methods we support for getting traffic into your cluster:
 ```
 d helm install nginx-ingress-controller ./charts/nginx-ingress-controller
 ```
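+
+The forwarding instructions below assume the ingress is reachable on NodePorts 31772 (http) and 31773 (https). If you want to confirm which NodePorts were actually allocated before asking for traffic to be forwarded, one way (an optional check) is:
+```
+d kubectl get services --all-namespaces | grep -E '3177[23]'
+```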

-Next, move the example values for `nginx-ingress-services`:
+### Forwarding traffic to your cluster
+
+#### Using network services
+
+Most enterprises have network service teams to forward traffic appropriately. Ask that your network team forward TCP port 443 to each one of the kubernetes servers on port 31773, and that they do the same for port 80, directing it to 31772.
+
+If they ask for clarification, a longer way of explaining it is: "wire expects https traffic to arrive on port 31773, and http traffic to arrive on port 31772. A load balancing rule needs to be in place, so that no matter which kubernetes host is up or down, the router will direct traffic to one of the operational kubernetes nodes. Any node that accepts connections on ports 31773 and 31772 can be considered operational."
+
+#### Through an IP Masquerading Firewall
+
+Your IP masquerading firewall must forward port 443 and port 80 to one of the kubernetes nodes (which must always remain online).
+Additionally, if you want to use LetsEncrypt CA certificates, requests made from behind your firewall to your external IP must be redirected to that same kubernetes node.
+
+The following instructions are given only as an example.
+Properly configuring IP masquerading requires a seasoned linux administrator with deep knowledge of networking.
+They assume all traffic destined to your wire cluster is going through a single IP masquerading firewall, running some modern version of linux.
+
+##### Incoming Traffic
+
+Here, you should check the ethernet interface name for your outbound IP.
+```
+ip ro | sed -n "/default/s/.* dev \([enps0-9]*\) .*/export OUTBOUNDINTERFACE=\1/p"
+```
+
+This will return a shell command setting a variable to your default interface. Copy and paste it, then run it. Next, supply your outside IP address:
+```
+export PUBLICIPADDRESS=
+```
+
+Select one of your kubernetes nodes; be aware that service will be lost if it goes offline:
+```
+export KUBENODE1IP=
+```
+
+Then run the following:
+```
+sudo iptables -t nat -A PREROUTING -d $PUBLICIPADDRESS -i $OUTBOUNDINTERFACE -p tcp --dport 80 -j DNAT --to-destination $KUBENODE1IP:80
+sudo iptables -t nat -A PREROUTING -d $PUBLICIPADDRESS -i $OUTBOUNDINTERFACE -p tcp --dport 443 -j DNAT --to-destination $KUBENODE1IP:443
+```
+Or add an appropriate rule to a config file (for UFW, /etc/ufw/before.rules).
+
+Then ssh into each kubenode and apply the following configuration:
+```
+sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 31773
+sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 31772
+```
+
+##### Mirroring the public IP
+
+cert-manager needs to be able to reach the kubernetes cluster on its external IP. This is troublesome, because in most security-conscious environments, the external IP is not owned by any of the kubernetes hosts.
+
+On an IP masquerading router, you can redirect outgoing traffic from your cluster: when the cluster asks to connect to your external IP, you can instead send that traffic to a kubernetes node inside the cluster. Here, `$INTERNALINTERFACE` is the LAN-facing interface of the firewall:
+```
+sudo iptables -t nat -A PREROUTING -i $INTERNALINTERFACE -d $PUBLICIPADDRESS -p tcp -m multiport --dports 80,443 -j DNAT --to-destination $KUBENODE1IP
+```
+
+### Acquiring / Deploying SSL Certificates
+
+SSL certificates are required by the nginx-ingress-services helm chart. You can either register and provide your own, or use cert-manager to request certificates from LetsEncrypt.
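+
+If you plan to bring your own certificate, it can save a failed deploy to sanity-check the files first (an optional check; `certificate.pem` and `key.pem` are the example filenames used below):
+```
+# print the subject and validity window of the certificate:
+openssl x509 -in certificate.pem -noout -subject -dates
+# check the private key (assumes an RSA key; use the matching openssl subcommand for other key types):
+openssl rsa -in key.pem -noout -check
+```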
+
+##### Prepare to deploy nginx-ingress-services
+
+Move the example values for `nginx-ingress-services`:
 ```
 mv ./values/nginx-ingress-services/{prod-values.example.yaml,values.yaml}
 mv ./values/nginx-ingress-services/{prod-secrets.example.yaml,secrets.yaml}
 ```

+#### Bring your own certificates
+
+If you generated your SSL certificates yourself, there are two ways to give them to wire:
+
+##### From the command line
+If you have the certificate and its corresponding key available on the filesystem, copy them into the root of the Wire-Server directory, and run:
+
+```
+d helm install nginx-ingress-services ./charts/nginx-ingress-services --values ./values/nginx-ingress-services/values.yaml --set-file secrets.tlsWildcardCert=certificate.pem --set-file secrets.tlsWildcardKey=key.pem
+```
+
+Do not try to use paths to refer to the certificates, as the 'd' command messes with file paths outside of Wire-Server.
+
+##### In your nginx config
 Change the domains in `values.yaml` to your domain. And add your wildcard or SAN certificate that is valid for all these
 domains to the `secrets.yaml` file.

-Now install the ingress:
+Now install the service with helm:
 ```
-d helm install nginx-ingress-services ./charts/nginx-ingress-services --values ./values/nginx-ingress-services/values.yaml --values ./values/nginx-ingress-services/secrets.yaml
+d helm install nginx-ingress-services ./charts/nginx-ingress-services --values ./values/nginx-ingress-services/values.yaml --values ./values/nginx-ingress-services/secrets.yaml
 ```

+#### Use letsencrypt generated certificates
+UNDER CONSTRUCTION:
+```
+d kubectl create namespace cert-manager-ns
+d helm upgrade --install -n cert-manager-ns --set 'installCRDs=true' cert-manager charts/cert-manager
+d helm upgrade --install nginx-ingress-services charts/nginx-ingress-services -f values/nginx-ingress-services/values.yaml
+```

-### Installing sftd
+## Installing sftd

 For full docs with details and explanations please see https://github.com/wireapp/wire-server-deploy/blob/d7a089c1563089d9842aa0e6be4a99f6340985f2/charts/sftd/README.md

@@ -336,7 +477,7 @@ kubenode3 node_labels="{'wire.com/role': 'sftd'}" node_annotations="{'wire.com/e

 If these values weren't already set earlier in the process you should rerun ansible to set them:
 ```
-d ansible-playbook -i ./ansible/inventory/offline ansible/kubernetes.yml --skip-tags bootstrap-os,preinstall,container-engine
+d ansible-playbook -i ./ansible/inventory/offline/hosts.ini ansible/kubernetes.yml --skip-tags bootstrap-os,preinstall,container-engine
 ```

 If you are restricting SFT to certain nodes, use `nodeSelector` to run on specific nodes (**replacing the example.com domains with yours**):

diff --git a/offline/kvm-hetzner.md b/offline/kvm-hetzner.md
new file mode 100644
index 000000000..b88f8dec7
--- /dev/null
+++ b/offline/kvm-hetzner.md
@@ -0,0 +1,360 @@
+# Scope
+
+This document gives exact instructions for performing an offline installation of Wire on a single physical machine from Hetzner. It uses the KVM virtual machine system to create all of the required virtual machines.
+
+This document also gives instructions for creating a TURN calling server on a separate VM.
+
+## create an SSH key pair.
+
+
+## use the hetzner robot console to create a new server.
+
+Select ubuntu 18.04 or ubuntu 20.04 on an ax101 dedicated server.
+
+returned IP: 65.21.197.76
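+
+If you did not already create a key pair in the first step above, a typical invocation on your local machine is (a sketch; the key path matches the ssh commands used below):
+```
+ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
+```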
+
+## Create demo user.
+
+### log in as root.
+
+```
+ssh -i ~/.ssh/id_ed25519 root@65.21.197.76 -o serveraliveinterval=60
+```
+
+### update OS
+When prompted about the ssh config, just accept the maintainer's version.
+```
+apt update
+apt upgrade -y
+```
+
+### create our 'demo' user
+```
+adduser --disabled-password --gecos "" demo
+```
+
+### copy ssh key to demo user
+
+```
+mkdir ~demo/.ssh
+cp ~/.ssh/authorized_keys /home/demo/.ssh/
+chown demo.demo ~demo/.ssh/
+chown demo.demo ~demo/.ssh/authorized_keys
+```
+
+### add a configuration so that demo does not need a password in order to sudo.
+
+```
+echo "demo ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/10-demo_user
+chmod 440 /etc/sudoers.d/10-demo_user
+```
+
+## ssh in as demo user.
+On the remote machine:
+```
+logout
+```
+On the local machine:
+```
+ssh -i ~/.ssh/id_ed25519 demo@65.21.197.76 -o serveraliveinterval=60
+```
+
+### use the demo user to reboot to apply security patches
+This step ensures sudo is working before you reboot the machine.
+```
+sudo reboot
+```
+
+## ssh in as demo user.
+```
+ssh -i ~/.ssh/id_ed25519 demo@65.21.197.76 -o serveraliveinterval=60
+```
+
+### Install screen
+```
+sudo apt install screen
+```
+
+### Start a screen session
+```
+screen
+```
+
+### download offline artifact.
+```
+wget https://s3-eu-west-1.amazonaws.com/public.wire.com/artifacts/wire-server-deploy-static-03fad4ff6d9a67eb56668fb259a0c1571cabcac4.tgz
+```
+
+### extract offline artifact.
+
+```
+mkdir Wire-Server
+cd Wire-Server
+tar -xzf ../wire-server-deploy-static-*.tgz
+```
+
+### extract debian archive
+```
+tar -xf debs.tar
+```
+
+### (FIXME: add iptables to the repo) Install Docker from debian archive.
+```
+sudo apt install iptables
+sudo dpkg -i debs/public/pool/main/d/docker-ce/docker-ce-cli_*.deb
+sudo dpkg -i debs/public/pool/main/c/containerd.io/containerd.io_*.deb
+sudo dpkg -i debs/public/pool/main/d/docker-ce/docker-ce_*.deb
+sudo dpkg --configure -a
+```
+
+### (missing) point host OS to debian archive
+
+### (rewrite) Install networking tools
+We're going to install dnsmasq in order to provide DNS and DHCP to our virtual machines. Networking will be handled by ufw.
+
+Note that dnsmasq always fails when it installs; the failures (red text) are normal.
+```
+sudo systemctl disable systemd-resolved
+sudo apt install dnsmasq ufw -y
+sudo systemctl stop systemd-resolved
+```
+
+### Tell dnsmasq to provide DNS locally.
+```
+sudo bash -c 'echo "listen-address=127.0.0.53" > /etc/dnsmasq.d/00-lo-systemd-resolvconf'
+sudo bash -c 'echo "no-resolv" >> /etc/dnsmasq.d/00-lo-systemd-resolvconf'
+sudo bash -c 'echo "server=8.8.8.8" >> /etc/dnsmasq.d/00-lo-systemd-resolvconf'
+sudo service dnsmasq restart
+```
+
+### Configure Firewall
+```
+sudo ufw allow 22/tcp
+sudo ufw allow from 172.16.0.0/24 proto udp to any port 53
+sudo ufw allow in on br0 from any proto udp to any port 67
+sudo ufw enable
+```
+
+### (temporary) copy helper scripts from wire-server-deploy
+```
+sudo apt install git -y
+git clone https://github.com/wireapp/wire-server-deploy.git
+cd wire-server-deploy
+git checkout kvm_support
+cd ..
+cp -a wire-server-deploy/kvmhelpers/ ./
+cp -a wire-server-deploy/bin/newvm.sh ./bin
+cp -a wire-server-deploy/ansible/setup-offline-sources.sh ./ansible
+chmod 550 ./bin/newvm.sh
+```
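+
+Before installing the virtualization packages in the next step, you can confirm that the CPU exposes hardware virtualization support (an optional check; a result of 0 means KVM acceleration is unavailable):
+```
+grep -cE '(vmx|svm)' /proc/cpuinfo
+```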
+### (rewrite) install qemu-kvm
+KVM is the virtualization system we're using.
+```
+sudo apt install qemu-kvm qemu-utils -y
+```
+
+#### Ubuntu 18
+If you are using ubuntu 18, you have to install the sgabios package:
+```
+sudo apt install sgabios -y
+```
+
+### add the demo user to the kvm group
+```
+sudo usermod -a -G kvm demo
+```
+
+### log out, log back in, and return to Wire-Server.
+```
+logout
+```
+
+```
+ssh -i ~/.ssh/id_ed25519 demo@65.21.197.76 -o serveraliveinterval=60
+cd Wire-Server/
+```
+
+### install bridge-utils
+So that we can manage the virtual network.
+```
+sudo apt install bridge-utils -y
+```
+
+### (personal) install emacs
+```
+sudo apt install emacs-nox -y
+```
+
+### tell dnsmasq to provide DHCP to our KVM VMs.
+```
+sudo bash -c 'echo "listen-address=172.16.0.1" > /etc/dnsmasq.d/10-br0-dhcp'
+sudo bash -c 'echo "dhcp-range=172.16.0.2,172.16.0.127,10m" >> /etc/dnsmasq.d/10-br0-dhcp'
+sudo service dnsmasq restart
+```
+
+### enable ip forwarding.
+```
+sudo sed -i "s/.*net.ipv4.ip_forward.*/net.ipv4.ip_forward=1/" /etc/sysctl.conf
+sudo sysctl -p
+```
+
+### enable network masquerading
+Here, you should check the ethernet interface name for your outbound IP.
+
+```
+ip ro | sed -n "/default/s/.* dev \([enps0-9]*\) .*/OUTBOUNDINTERFACE=\1/p"
+```
+This will return a shell command setting a variable to your default interface. Copy and paste it, then run the following:
+
+```
+sudo sed -i 's/.*DEFAULT_FORWARD_POLICY=.*/DEFAULT_FORWARD_POLICY="ACCEPT"/' /etc/default/ufw
+sudo sed -i "1i *nat\n:POSTROUTING ACCEPT [0:0]\n-A POSTROUTING -s 172.16.0.0/24 -o $OUTBOUNDINTERFACE -j MASQUERADE\nCOMMIT" /etc/ufw/before.rules
+sudo service ufw restart
+```
+
+### add static IPs for VMs.
+```
+sudo bash -c 'echo "dhcp-host=assethost,172.16.0.128,10h" > /etc/dnsmasq.d/20-hosts'
+sudo bash -c 'echo "dhcp-host=kubenode1,172.16.0.129,10h" >> /etc/dnsmasq.d/20-hosts'
+sudo bash -c 'echo "dhcp-host=kubenode2,172.16.0.130,10h" >> /etc/dnsmasq.d/20-hosts'
+sudo bash -c 'echo "dhcp-host=kubenode3,172.16.0.131,10h" >> /etc/dnsmasq.d/20-hosts'
+sudo bash -c 'echo "dhcp-host=ansnode1,172.16.0.132,10h" >> /etc/dnsmasq.d/20-hosts'
+sudo bash -c 'echo "dhcp-host=ansnode2,172.16.0.133,10h" >> /etc/dnsmasq.d/20-hosts'
+sudo bash -c 'echo "dhcp-host=ansnode3,172.16.0.134,10h" >> /etc/dnsmasq.d/20-hosts'
+sudo service dnsmasq restart
+```
+
+### Acquire ubuntu 18.04 server installation CD (netboot).
+For the purposes of our text-only demo, we are going to use one of the netboot ISOs. This allows us to control the install from an SSH prompt.
+```
+curl http://archive.ubuntu.com/ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/mini.iso -o ubuntu.iso
+```
+
+### create assethost
+```
+./bin/newvm.sh -d 40 -m 1024 -c 1 assethost
+```
+
+### create kubenode1
+```
+./bin/newvm.sh -d 80 -m 8192 -c 6 kubenode1
+```
+
+### create kubenode2
+```
+./bin/newvm.sh -d 80 -m 8192 -c 6 kubenode2
+```
+
+### create kubenode3
+```
+./bin/newvm.sh -d 80 -m 8192 -c 6 kubenode3
+```
+
+### create ansnode1
+```
+./bin/newvm.sh -d 80 -m 8192 -c 6 ansnode1
+```
+
+### create ansnode2
+```
+./bin/newvm.sh -d 80 -m 8192 -c 6 ansnode2
+```
+
+### create ansnode3
+```
+./bin/newvm.sh -d 80 -m 8192 -c 6 ansnode3
+```
+
+### Start a node
+Specify NOREBOOT, so the VM powers off after the install.
+```
+cd
+NOREBOOT=1 ./start_kvm.sh
+```
+
+When qemu starts (you see H. Peter Anvin's name), hit escape. At the " oot:" prompt, type 'expert console=ttyS0', and hit enter.
+
+### install node
+select 'choose language'
+ * english
+ * united states
+ * united states
+ * no additional.
+select 'Detect network hardware'
+ * select 'Continue' to let it install usb-storage.
+select 'Configure the network'
+ * no, no vlan trunking.
+ * yes, Auto-configure networking.
+ * hit 'Continue' to select the (default) 3 seconds to detect a link.
+ * supply the hostname.
+   * for the assethost, type assethost
+   * for the first kubernetes node, type 'kubenode1'.
+   * ... etc
+ * supply the domain name
+   * domain name: fake.domain
+Select "Choose a mirror of the ubuntu archive"
+ * select http
+ * select united states
+ * select us.archive.ubuntu.com
+ * select 'Continue' for no http proxy information
+select "Download installer components"
+ * select no components, hit "Continue"
+select "Set up Users and Passwords"
+ * enable shadow passwords
+ * do not allow root login.
+ * full name: demo
+ * username: demo
+ * password: (given by julia, same for all VMs)
+ * yes, use a weak password.
+ * do not encrypt home directory.
+select 'configure the clock'
+ * yes, set the clock using NTP
+ * yes, ntp.ubuntu.com
+ * yes, a berlin timezone is correct.
+select 'detect disks'
+select 'partition disks'
+ * guided, use entire disk and set up LVM.
+ * pick the only option they give you for disks.
+ * select 'All files in one partition'
+ * yes, write the changes to disk.
+ * accept the default volume group name "-vg"
+ * select 'Continue' to consume the entire disk.
+ * yes, write the changes to disk.
+select 'Install the base system'
+ * install the 'linux generic' kernel.
+ * choose 'generic' to install all of the available drivers.
+select 'Configure the package manager'
+ * Use restricted software? Yes
+ * Use software from the "Universe" component? yes
+ * Use software from the "Multiverse" component? yes
+ * Use backported software? yes
+ * Use software from the "Partner" repository? no
+ * enable source repositories? No.
+ * Select continue to use the security archive.
+select 'Select and install software'
+ * select "Install security updates automatically"
+ * select "OpenSSH Server", and hit continue.
+select "Install the GRUB bootloader on a first disk"
+ * install the GRUB bootloader to the master boot record? yes.
+ * select the only device displayed (/dev/sda).
+ * no to installing Extra EFI, just-in-case.
+select "Finish the installation"
+ * yes, the clock is set to UTC
+ * select continue to reboot.
+
+### first boot
+ * run "DRIVE=c ./start_kvm.sh"
+ * hit escape if you want to see the boot menu.
+
+
+### From this point:
+
+Switch to docs.md.
+
+Skip to the step where we source the offline environment.
+
+When editing the inventory, create 'ansnode' entries, rather than separate cassandra, elasticsearch, and minio nodes.
+

diff --git a/offline/upgrading.md b/offline/upgrading.md
index ef8532f42..d7f6d6242 100644
--- a/offline/upgrading.md
+++ b/offline/upgrading.md
@@ -5,7 +5,8 @@ We have a pipeline in `wire-server-deploy` producing container images, static bi

 Create a fresh workspace to download the new artifacts:
 ```
-$ cd ...  # you pick a good location!
+$ mkdir ...  # you pick a good location!
+$ cd ...
 ```

 Obtain the latest airgap artifact for wire-server-deploy. Please contact us to get it for now. We are
@@ -27,36 +28,38 @@ sudo apt clean

 ### Kubernetes hosts:

-#### Wire
+#### Wire Cluster

 Remove wire-server images from two releases ago, or from the current release that we know are unused.
 For instance,

 ```
 sudo docker image ls
+# look at the output of the last command, to find the version you are removing:
 VERSION="2.106.0"
-sudo docker image ls | grep -E "^quay.io/wire/" | grep $VERSION | sed "s/.*[ ]*\([0-9a-f]\{12\}\).*/sudo docker image rm \1/"
+sudo docker image ls | grep -E "^quay.io/wire/([bcg]|spar|nginz)" | grep $VERSION | sed "s/.*[ ]*\([0-9a-f]\{12\}\).*/sudo docker image rm \1/"
 ```

-If you are not running SFT in your main cluster (for example, do not use SFT, or have SFT in a separate DMZ'd cluster).. then remove SFT images from the Wire Kubernetes.
+If you are not running SFT in your main cluster (for example, you do not use SFT, or you have SFT in a separate DMZ'd cluster), then remove SFT images from the Wire kubernetes cluster.
 ```
 sudo docker image ls | grep -E "^quay.io/wire/sftd" | sed "s/.*[ ]*\([0-9a-f]\{12\}\).*/sudo docker image rm \1/"
 ```

-#### SFT
+#### SFT Cluster

 If you are running a DMZ deployment, prune the old wire-server images and their dependencies on the SFT kubernetes hosts...
 ```
 sudo docker image ls | grep -E "^quay.io/wire/(team-settings|account|webapp|namshi-smtp)" | sed "s/.*[ ]*\([0-9a-f]\{12\}\).*/sudo docker image rm \1/"
 sudo docker image ls | grep -E "^(bitnami/redis|airdock/fake-sqs|localstack/localstack)" | sed "s/.*[ ]*\([0-9a-f]\{12\}\).*/sudo docker image rm \1/"
-sudo docker image rm
 ```

 ## Preparing for deployment

 Verify you have the container images and configuration for the version of wire you are currently running.

-Extract the latest airgap artifact into your workspace:
+Extract the latest airgap artifact into a NEW workspace:
 ```
 $ wget https://s3-eu-west-1.amazonaws.com/public.wire.com/artifacts/wire-server-deploy-static-.tgz
+$ mkdir New-Wire-Server
+$ cd New-Wire-Server
 $ tar xvzf wire-server-deploy-static-.tgz
 ```
 Where the HASH above is the hash of your deployment artifact, given to you by Wire, or acquired by looking at the above build job.
@@ -130,9 +133,11 @@ The following is a list of important artifacts which are provided:

 Copy `ansible/inventory/offline/99-static` to `ansible/inventory/offline/hosts.ini`. Compare the inventory from your old install to the inventory of your new install.
+```
+diff -u ..//ansible/inventory/offline/99-static ansible/inventory/offline/hosts.ini
+```

-Here you will describe the topology of your offline deploy. There are instructions in the comments on how to set
-everything up. You can also refer to extra information here.
+Here you will describe the topology of your offline deploy. There are instructions in the comments on how to set everything up. You can also refer to extra information here.
 https://docs.wire.com/how-to/install/ansible-VMs.html

 ### updates to the inventory

@@ -146,17 +151,17 @@ restund_uid = root

 minio_deeplink_prefix = domainname.com
 minio_deeplink_domain = prefix-
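+
+For orientation, a sketch of how these new entries might sit in your `hosts.ini` (the section placement is an assumption, and the values are the examples from above, not recommendations):
+```
+[all:vars]
+restund_uid = root
+minio_deeplink_prefix = domainname.com
+minio_deeplink_domain = prefix-
+```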

-# move the kubeconfig
+# migrate the kubeconfig

 Old versions of the package contained the kubeconfig at ansible/kubeconfig.
 Newer ones create a directory at ansible/inventory/offline/artifacts, and place the kubeconfig there, as 'admin.conf'.

 If your deployment package uses the old style, then in the place where you are keeping your new package:
 ```
 mkdir ansible/inventory/offline/artifacts
 cp ..//ansible/kubeconfig ansible/inventory/offline/artifacts/admin.conf
 ```

-otherwise:
+Otherwise:
 ```
 mkdir ansible/inventory/offline/artifacts
 sudo cp ..//ansible/inventory/offline/artifacts/admin.conf ansible/inventory/offline/artifacts/admin.conf
@@ -164,18 +169,50 @@ sudo cp ..//ansible/inventory/offline/artifacts/admin.conf ansi
 ```

 ## Preparing to upgrade kubernetes services

-Log into the assethost, and verify the 'serve-assets' systemd component is running by looking at `netstat -an`, and checking for `8080`. If it's not:
+Log into the assethost, and verify the 'serve-assets' systemd component is running by looking at `sudo lsof -i -P -n | grep LISTEN`, and checking for `8080`. If it's not:
 ```
 sudo service serve-assets start
 ```

+### WORKAROUND: old debian key
+All of our debian archives up to version 4.12.0 used a now-outdated debian repository signature. Some modifications are required to be able to install everything properly.
+
+First, gather a copy of the 'setup-offline-sources.yml' file from: https://raw.githubusercontent.com/wireapp/wire-server-deploy/kvm_support/ansible/setup-offline-sources.yml .
+```
+wget https://raw.githubusercontent.com/wireapp/wire-server-deploy/kvm_support/ansible/setup-offline-sources.yml
+```
+Copy it into the ansible/ directory:
+```
+cp ansible/setup-offline-sources.yml ansible/setup-offline-sources.yml.backup
+cp setup-offline-sources.yml ansible/
+```
+
+Open it with your preferred text editor and edit the following:
+* find the big block of comments and uncomment everything in it (`- name: trust everything...`)
+* after the block you will find `- name: Register offline repo key...`. Comment out that segment (do not comment out the part with `- name: Register offline repo`!)
+
+Then disable checking for outdated signatures by editing the following file:
+```
+ansible/roles/external/kubespray/roles/container-engine/docker/tasks/main.yml
+```
+* comment out the block with `- name: ensure docker-ce repository public key is installed...`
+* comment out the next block `- name: ensure docker-ce repository is enabled`
+
+Now you are ready to start deploying services.
+
+#### WORKAROUND: dependency
+Some ubuntu systems do not have GPG installed by default. Wire assumes it is already present; ensure gpg is installed on all of your nodes before continuing to the next step.
+
+#### Populate the assethost, and prepare to install images from it.

 Since docker is already installed on all nodes that need it, push the new container images to the assethost, and seed all container images:
 ```
-d ansible-playbook -i ./ansible/inventory/offline/hosts.ini ansible/setup-offline-sources.yml --tags "containers-helm"
+d ansible-playbook -i ./ansible/inventory/offline/hosts.ini ansible/setup-offline-sources.yml
 d ansible-playbook -i ./ansible/inventory/offline/hosts.ini ansible/seed-offline-docker.yml
 ```
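+
+If you want to spot-check that the new images actually landed before continuing (optional; run on any node that uses docker):
+```
+sudo docker image ls | grep "^quay.io/wire/"
+```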
+#### Ensuring kubernetes is healthy.
+
 Ensure the cluster is healthy. Use kubectl to check the node health:

 ```
@@ -249,11 +286,11 @@ cp ..//values/demo-smtp/values.yaml values/demo-smtp/values.yaml
 d helm upgrade demo-smtp ./charts/demo-smtp/ --values ./values/demo-smtp/values.yaml
 ```

-#### Upgrading the NginX Ingress
+#### Upgrading nginx-ingress-services

 Compare your nginx-ingress-services configuration files, and decide whether you need to change them or not.
 ```
-diff -u ..//values/ngin-ingress-services/values.yaml values/nginx-ingress-services/prod-values.example.yaml
+diff -u ..//values/nginx-ingress-services/values.yaml values/nginx-ingress-services/prod-values.example.yaml
 ```

 If there are no differences, copy these files into your new tree.
 ```
 cp ..//values/nginx-ingress-services/values.yaml values/nginx-ingress-services/values.yaml
 ```

+#### Upgrading nginx-ingress-controller
+
+Re-deploy your ingress, to direct traffic into your cluster with the new version of nginx.
+```
 d helm upgrade nginx-ingress-controller ./charts/nginx-ingress-controller/
-d helm upgrade nginx-ingress-services ./charts/nginx-ingress-services/ --values ./values/nginx-ingress-services/values.yaml --values ./values/nginx-ingress-services/secrets.yaml
 ```

 ### Upgrading Wire itself

@@ -275,9 +315,43 @@ Now upgrade `wire-server`:
 ```
 d helm upgrade wire-server ./charts/wire-server/ --timeout=15m0s --values ./values/wire-server/values.yaml --values ./values/wire-server/secrets.yaml
 ```

+#### Bring your own certificates
+
+If you generated your own SSL certificates, there are two ways to give these to wire:
+
+##### From the command line
+If you have the certificate and its corresponding key available on the filesystem, copy them into the root of the Wire-Server directory, and run:
+
+```
+d helm install nginx-ingress-services ./charts/nginx-ingress-services --values ./values/nginx-ingress-services/values.yaml --set-file secrets.tlsWildcardCert=certificate.pem --set-file secrets.tlsWildcardKey=key.pem
+```
+
+Do not try to use paths to refer to the certificates, as the 'd' command messes with file paths outside of Wire-Server.
+
+##### In your nginx config
+This is the more error-prone process, due to having to edit yaml files.
+
+Change the domains in `values.yaml` to your domain. And add your wildcard or SAN certificate that is valid for all these
+domains to the `secrets.yaml` file.
+
+Now install the service with helm:
+```
+d helm install nginx-ingress-services ./charts/nginx-ingress-services --values ./values/nginx-ingress-services/values.yaml --values ./values/nginx-ingress-services/secrets.yaml
+```
+
+#### Use letsencrypt generated certificates
+
+UNDER CONSTRUCTION:
+If your machine has internet access to letsencrypt's servers, you can configure cert-manager to generate certificates, and load them for you.
+```
+d kubectl create namespace cert-manager-ns
+d helm upgrade --install -n cert-manager-ns --set 'installCRDs=true' cert-manager charts/cert-manager
+d helm upgrade --install nginx-ingress-services charts/nginx-ingress-services -f values/nginx-ingress-services/values.yaml
+```
+
 ### Marking kubenode for calling server (SFT)

-The SFT Calling server should be running on a kubernetes nodes that are connected to the public internet.
+The SFT Calling server should be running on a set of kubernetes nodes that have traffic directed to them from the public internet.
 If not all kubernetes nodes match these criteria, you should specifically label the nodes that do, so that we can be sure SFT is deployed to the correct nodes.
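+
+A sketch of labeling such a node directly with kubectl (assumptions for illustration: `kubenode1` is one of your internet-facing nodes, and the label key mirrors the `node_labels` entries used during installation):
+```
+d kubectl label node kubenode1 wire.com/role=sftd
+```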