
Introduction

This repository contains a collection of Ansible playbooks to help install Red Hat OpenShift Container Platform 4 on VMware using the UPI method. Both connected and disconnected environments are currently supported. No DHCP/PXE is required.

Playbooks

| Name | Description |
| --- | --- |
| network_check | Checks the network, DNS and general connectivity of the installation environment. |
| bastion_setup | Prepares the bastion server for OCP installation, including setting up HAProxy or a mirrored registry. |
| lb_check_setup | Installs new VMs for the load balancer check. |
| lb_check | Runs checks against the load balancer to ensure it is configured with the correct backends and that SSL passthrough works. No connection to the load balancer is required. |
| create_iso | Creates a boot ISO for each node. |
| ocp_setup | Creates the installer artifacts and boots each VM with its ISO. |
| destroy | Destroys the OCP VMs, excluding the bastion. |
| remove_cdrom | Ejects the CD-ROM from the OCP nodes. |
| registry_setup | Helps to set up a local registry to mirror images. |

Prerequisite Setup

Preparing a RHEL 7 template for bastion and load balancer check

# /bin/sed -i '/^(HWADDR|UUID)=/d' /etc/sysconfig/network-scripts/ifcfg-e*
# yum install -y rsync perl open-vm-tools
# systemctl enable vmtoolsd

Then export this VM as a VMware OVA file.
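
If you prefer the command line over the vSphere client, ovftool (when available) can export the VM directly to an OVA. The vCenter hostname, user, datacenter and VM name below are placeholders:

# ovftool "vi://user@vcenter.example.com/MyDatacenter/vm/rhel7-template" /tmp/rhel7-template.ova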

Bringing your own repos for a disconnected install

You will need to bring the entire /root/repos directory to the target environment.

Pip

# yum localinstall -y https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/p/python2-pip-8.1.2-12.el7.noarch.rpm
# mkdir -p /root/repos/pip
# (cd /root/repos/pip && pip download passlib pyvmomi bcrypt dnspython netaddr jmespath docker-image-py --no-cache-dir)

To avoid building on the target bastion, you will have to do the following for the regex module:

# yum install -y gcc python-devel
# (cd /tmp && pip download regex && tar xvzf regex*.tar.gz && cd regex* && python setup.py bdist_wheel && cp dist/*.whl /root/repos/pip)

RPMS

If there is no Red Hat Satellite in the environment, you can bring in your own repositories.

# yum install -y yum-utils
# reposync -n -p /root/repos --repoid rhel-7-server-rpms --repoid rhel-7-server-ansible-2-rpms --repoid rhel-7-server-extras-rpms
# ( cd /root/repos && curl -O https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/p/python2-pip-8.1.2-12.el7.noarch.rpm)
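
Depending on how these repositories will be consumed on the bastion (the playbooks can also point yum at them via the yum_repos/yum_conf variables), you may need to generate repodata for each synced directory. A sketch, assuming the default reposync layout under /root/repos:

# yum install -y createrepo
# for d in /root/repos/rhel-7-server-*; do createrepo "$d"; done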

OpenShift images for base installation

You can set up a local registry by running the setup_registry.yml playbook. This requires an Internet connection and pip to be installed.

# ansible-playbook --ask-vault-pass setup_registry.yml
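
If you mirror the release images manually with oc, the command looks roughly like the following for 4.2.10; the registry hostname, repository and pull secret path are placeholders:

# oc adm release mirror -a /path/to/pull-secret.json \
    --from=quay.io/openshift-release-dev/ocp-release:4.2.10 \
    --to=registry.example.com:5000/ocp4/openshift4 \
    --to-release-image=registry.example.com:5000/ocp4/openshift4:4.2.10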

After performing an oc mirror of the release images (alternatively, you can use this playbook to do so), archive the registry data:

# (cd /opt/registry/data && tar cvzf /root/repos/registry_data.tar.gz .)

If running a disconnected installation, you will need to extract the openshift-install binary after mirroring and copy it into /root/repos/sbin.
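
The binary can be obtained from the mirrored release with something like the following (registry details and pull secret path are placeholders):

# oc adm release extract -a /path/to/pull-secret.json --command=openshift-install \
    registry.example.com:5000/ocp4/openshift4:4.2.10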

# cp openshift-install /root/repos/sbin

Export this git repository

# git archive --format=tar.gz --prefix=openshift4-vmware-upi/ master > /root/repos/git.tar.gz

Copy useful binaries

https://stedolan.github.io/jq/download/

https://github.com/vmware/govmomi/releases

At a minimum, govc is required by the playbooks.

# mkdir -p /root/repos/sbin
# cp /path/to/jq /root/repos/sbin/jq
# cp /path/to/govc /root/repos/sbin/govc

Prepare the registry docker image

# yum install -y podman
# podman pull docker.io/library/registry:2
# podman save docker.io/library/registry:2 -o /root/repos/registry.tar
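
On the disconnected bastion, this archive can later be loaded back into the local image store, for example:

# podman load -i /root/repos/registry.tar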

OpenShift installer

# cd /root/repos
# curl -O https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.2.10/openshift-install-linux-4.2.10.tar.gz
# curl -O https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.2.10/openshift-client-linux-4.2.10.tar.gz
# curl -O https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.2/4.2.0/rhcos-4.2.0-x86_64-metal-bios.raw.gz
# curl -O https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.2/4.2.0/rhcos-4.2.0-x86_64-installer.iso

Setup

Configuring the bastion host

After you have provisioned the bastion host, ensure the FQDN, network and /etc/resolv.conf configurations are correct.
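
For example, the FQDN can be set and verified with hostnamectl (the hostname below is a placeholder):

# hostnamectl set-hostname bastion.ocp4.example.com
# hostnamectl status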

Copy /root/repos onto the bastion:

# cp -vr /mnt/repos /root/ 

If the system can be registered to Satellite, register and enable the following repos:

# subscription-manager repos --disable=*
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rhel-7-server-ansible-2-rpms
# subscription-manager repos --enable=rhel-7-server-extras-rpms

Copy binaries:

# cp /root/repos/sbin/{govc,jq} /usr/local/sbin/

Bootstrap the packages. If there is no Satellite, install them from the local /root/repos repositories. The playbooks have been tested with Ansible 2.9.
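
As a bootstrap sketch only (the yum_repos and yum_conf variables configure this properly later), a throwaway repo definition pointing at one of the synced directories might look like this; the paths assume the reposync layout shown earlier:

# cat > /etc/yum.repos.d/local.repo <<'EOF'
[local-rhel-7-server-rpms]
name=Local rhel-7-server-rpms
baseurl=file:///root/repos/rhel-7-server-rpms
enabled=1
gpgcheck=0
EOF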

# yum install ansible rhel-system-roles

Install python pip:

# yum localinstall -y /root/repos/python2-pip-8.1.2-12.el7.noarch.rpm
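
If you need the Python modules downloaded earlier, they can be installed offline from the local wheel directory, for example:

# pip install --no-index --find-links=/root/repos/pip passlib pyvmomi bcrypt dnspython netaddr jmespath docker-image-py regex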

Untar Ansible playbooks

# (cd /root && tar xvzf /root/repos/git.tar.gz)

Configure Ansible inventory

The default inventory file is inventory.yml

Ansible host groups

| Name | Description |
| --- | --- |
| bastion_grp | The bastion node. |
| apps_lb | Defines which host is the apps load balancer. |
| master_lb | Defines which host is the masters load balancer. |
| bootstrap_grp | The bootstrap node. |
| masters_grp | The group of master nodes. |
| workers_grp | The group of worker nodes. All non-master nodes are part of this group. |
| infra_routers_grp | The group of infra routers. |

VM specifications

You can define the VM size by specifying the following host vars.

ansible_host: xxx.xxx.xxx.xxx
vm_memory_mb: 7168
vm_cpu: 4            
vm_disks: 
  - size: 80 
    type: thin

Important Ansible variables

| Name | Description |
| --- | --- |
| setup_haproxy | Whether to configure HAProxy on the bastion for apps and masters |
| setup_registry | Whether to configure a registry on the bastion. This implies a restricted network installation. |
| cluster_name | OCP cluster name |
| base_domain | OCP base domain name |
| openshift_cluster_network_cidr | OpenShift cluster network CIDR |
| openshift_host_prefix | OpenShift host prefix |
| openshift_service_network_cidr | OpenShift service network CIDR |
| apps_use_wildcard_dns | Whether to check for wildcard DNS |
| timesync_ntp_servers | NTP servers to configure |
| vm_template | RHEL 7 VM template name |
| yum_repos | If there is no Satellite, configure the local yum repos |
| yum_conf | If there is no Satellite, configure yum to point to the local repository |
| use_vcp | Whether to integrate OCP with the VMware Cloud Provider |

Refer to the inventory file for the rest of the variables.

Super secretive vault.yml in the playbook directory

Create a vault with the following vars:

vcenter_hostname: 
vcenter_username: 
vcenter_password: 
vcenter_insecure_ssl: true

vcp_username: 
vcp_password: 

registry_username: openshift
registry_password: password

ocp_pull_secret: # from cloud.redhat.com when doing a connected install
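
The vault can be created (or an existing plaintext file encrypted) with ansible-vault:

# ansible-vault create vault.yml      # opens an editor and writes an encrypted file
# ansible-vault encrypt vault.yml     # or encrypt an already written vault.yml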

Running the playbooks

Run a network check. This creates a /tmp/dns_check.txt output file. It is recommended to run this check first.

# ansible-playbook --ask-vault-pass network_check.yml

After the network check is successful, it is time to set up the bastion host. Depending on the defined variables, it can optionally set up HAProxy and a registry for you.

# ansible-playbook --ask-vault-pass bastion_setup.yml

Next, perform a load balancer check. This creates a /tmp/lb_check.txt output file.

# ansible-playbook --ask-vault-pass lb_check_setup.yml
# ansible-playbook --ask-vault-pass lb_check.yml

After everything is verified to be correct, you can then create the boot ISOs. This will upload the ISOs to the datastore defined in the inventory file.

# ansible-playbook --ask-vault-pass create_iso.yml

Once the ISOs are uploaded, we can now create the OpenShift manifest files and virtual machines. The virtual machines will be created and powered on automatically.

# ansible-playbook --ask-vault-pass ocp_setup.yml

The installation will start and you can continue by following the OpenShift Installation Guide. When bootstrap is complete, you can shut down the bootstrap node.
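
To tell when bootstrap is complete, you can watch it from the bastion with the installer; the --dir path is a placeholder for wherever the installation assets were generated:

# openshift-install --dir=/path/to/install-dir wait-for bootstrap-complete --log-level=info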

After the installation is complete, you can eject all the CD-ROMs:

# ansible-playbook --ask-vault-pass remove_cdrom.yml

Clean Up

To destroy all the VMs, excluding the bastion:

# ansible-playbook --ask-vault-pass destroy.yml

Others

Sample dnsmasq

A sample dnsmasq config has been provided.
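
The provided sample is the reference; purely as an illustration, the kinds of records OCP 4.2 expects (API, wildcard apps, etcd hosts and the etcd SRV records) look roughly like this in dnsmasq syntax, with names and addresses as placeholders:

# cat > /etc/dnsmasq.d/ocp4.conf <<'EOF'
address=/.apps.ocp4.example.com/192.168.1.5
host-record=api.ocp4.example.com,192.168.1.5
host-record=api-int.ocp4.example.com,192.168.1.5
host-record=etcd-0.ocp4.example.com,192.168.1.10
srv-host=_etcd-server-ssl._tcp.ocp4.example.com,etcd-0.ocp4.example.com,2380
EOF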

HAProxy stats page

HAProxy on the bastion is configured with a stats page at http://hostname:5005/haproxy_stats.

OpenShift 4 day 2 Ops

Day 2 Ansible Repo

TODO

  • Able to customize VM folder name

  • Check VM name length is <= 80

  • Check for existence/creation of the VM folder early and not at the lb check

  • Scale worker playbook