
Installation steps to deploy OpenConext on a single system other than the CentOS 7 Vagrant VM

Thijs Kinkhorst edited this page Jul 12, 2023 · 40 revisions

Introduction

This document describes how to deploy the OpenConext platform to a host of your own choice. The deployment is done using Ansible, which is run from your local workstation. The host should be provisioned with either Red Hat Enterprise Linux 7 or CentOS 7 and needs at least 6 GB of RAM.

Please make sure you have these characteristics of your setup available before you start:

  • target_ip = The IP address or hostname of your target machine, e.g. 192.168.66.100
  • environment = A unique name for the environment you are deploying, e.g. ocdemo, test, qa, acc, prod
  • domain = The domain part of your deployment, for example "openconext.org"

Installation steps

  • Prepare your machine
  • Install Ansible
  • Clone Openconext-deploy repository
  • Adapt the Ansible hosts file and set up connectivity to the Ansible target
  • Create new certificates and passwords
  • Prepare run-books etc
  • Deploy to target
  • Finalize

Prepare your machine

Make sure you have a fresh install of CentOS 7, fully updated. Make sure that the firewall allows traffic to ports 22 (SSH) and 443 (HTTPS) from your local workstation.

Install Ansible

Perform a standard Ansible installation on your local workstation.

Clone Openconext-deploy repository

On your local workstation:

git clone https://github.com/OpenConext/OpenConext-deploy.git

Setup connectivity to the destination host

Make sure you can connect via SSH from the Ansible host to the Ansible target, preferably using SSH keys.
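For example, you can generate a dedicated key pair and install it on the target; the key path, username, and address below are placeholders, not values from this setup:

```shell
# Generate a dedicated ed25519 key pair with an empty passphrase.
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -f "$HOME/.ssh/openconext_deploy" -N '' -q
# Then install the public key on the target and verify the connection, e.g.:
#   ssh-copy-id -i ~/.ssh/openconext_deploy.pub root@192.168.66.100
#   ssh -i ~/.ssh/openconext_deploy root@192.168.66.100 hostname
```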

Create a new environment

A script is available to provision a new environment. It will create a new environment directory under environments-external/ and generate all necessary passwords and (self-signed) certificates.

Replace <environment> with the name of your environment and <domain> with the domain of the target.

./prep-env <environment> <domain>

Prepare host_vars

Run

cp environments-external/<environment>/host_vars/template.yml environments-external/<environment>/host_vars/<target_ip>.yml

(where <target_ip> is the ip address or hostname of your target machine, whatever is set in your inventory file)

Change in environments-external/<environment>/inventory:

  • change all references from %target_host% to <target_ip>
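This substitution can be done with sed. The example below works on a scratch copy so it can be run anywhere; in a real checkout, point it at environments-external/<environment>/inventory instead, and the group name is purely illustrative:

```shell
# Scratch inventory standing in for environments-external/<environment>/inventory.
mkdir -p /tmp/oc-demo
printf '[apps]\n%%target_host%%\n' > /tmp/oc-demo/inventory
# Replace every %target_host% placeholder with the target address.
sed -i 's/%target_host%/192.168.66.100/g' /tmp/oc-demo/inventory
cat /tmp/oc-demo/inventory
```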

Prepare for external connectivity

When you install OpenConext, it adds a loadbalancer in front of the applications, which listens on two IP addresses: one IP address that should be shielded from the internet, and one public IP address. The restricted IP address is meant for internal APIs and management applications; the public IP address is for public applications, like Engineblock. It's recommended that your VM has two public IP addresses for this purpose. You need to configure those IP addresses in the group_vars file. Please change the configured 127.0.0.1 and 127.0.0.2 addresses in the group_vars to your public IP addresses:

Change in environments-external/<environment>/group_vars/<environment>.yml:

haproxy_sni_ip.ipv4
haproxy_sni_ip_restricted.ipv4

If you have ipv6 addresses, you can change the ipv6 equivalents as well.
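As an illustration, the relevant group_vars entries could end up looking like this; the addresses are examples and the exact surrounding structure of <environment>.yml may differ:

```yaml
haproxy_sni_ip:
  ipv4: 192.0.2.10          # public, unrestricted address
haproxy_sni_ip_restricted:
  ipv4: 192.0.2.11          # address for restricted/management applications
```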

If you don't have the luxury of two public IP addresses, you can leave haproxy_sni_ip_restricted.ipv4 as it is and only configure haproxy_sni_ip.ipv4. You then need to make sure that the applications are moved to the unrestricted IP address. To do so, change environments-external/<environment>/group_vars/<environment>.yml: look at:

haproxy_applications:

Remove all lines that have:

restricted: yes

If you use a public domain, you can add the following entries to the DNS: Public unrestricted ip address:

db.<domain>
engine.<domain>
mujina-idp.<domain>
mujina-sp.<domain>
connect.<domain>
profile.<domain>
static.<domain>
teams.<domain>
voot.<domain>

Public restricted ip address:

aa.<domain>
pdp.<domain>
engine-api.<domain>
manage.<domain>

If you want to use your hosts file instead, edit /etc/hosts on the target and on the machine from which the target is accessed:

<ip-address_restricted> aa.<domain> engine-api.<domain>  manage.<domain> pdp.<domain> 
<ip-address_unrestricted>  db.<domain>
<ip-address_unrestricted>  engine.<domain> mujina-idp.<domain> mujina-sp.<domain> connect.<domain>
<ip-address_unrestricted> static.<domain> teams.<domain> voot.<domain> profile.<domain> 

Deploy to target

Run the provision script. It's a wrapper around ansible-playbook, and takes the following three arguments:

provision name_of_the_environment remote_username location_of_your_secrets

This becomes:

./provision <environment> <remoteusername> environments-external/<environment>/secrets/<environment>.yml   

You can also run a minimal install which will only install these apps:

  • Engineblock (the SAML proxy itself)
  • Manage: The application that you can use to manage all the SAML entities
  • Profile: A user profile application
  • Mujina-IdP: A mock IdP that you can use to test (you can use any username and password you'd like)

To do so, add the command-line parameter --tags core, like so:

./provision <environment> <remoteusername> environments-external/<environment>/secrets/<environment>.yml --tags core

You can also use our Docker image for this purpose; please check the related wiki page for more information.

Finalize

After the first run you can remove the role that provisions all entities to Engineblock. Make sure you have the file playbook.yml present in your environment directory (so environments-external/<environment>/playbook.yml) with these contents:

---
- hosts: all

Please check your firewall settings (iptables -L) as these were not modified by the install. Access to port 443 is required to use the platform. If your setup did not (yet) allow access to port 443, you may need to restart the Shibboleth service to allow it to fetch metadata from https://engine.<target>.<domain>

Reboot the target system to make sure all changes have been activated. You should now be able to connect to https://engine.<domain> to see Engineblock's front page, or https://manage.<domain> to access the management interface.

Adding new https certificates to your already installed OC box

Adding a new certificate to the OC box is not too difficult. Even though OC deploys several services, https is terminated at the HAproxy loadbalancers. We only need to update the certificates there.

  • Get yourself a new certificate with either *.yourdomain.org or with all the names in use for the OC services as subject alternative names. Having separate certificates on a per-service basis is not recommended for a VM setup.
  • Make sure you have one file containing both the certificate and the (intermediate) CA certificate(s), in that order.
  • Copy the file with the certificate and CA chain to the files directory in your environment, environments-external/<environment>/files/certs. Make sure the cert is named star.<environment>.pem.
  • Copy the key of your certificate to the secrets file in your environment, environments-external/<environment>/secrets/<environment>.yml, by replacing the value of the https_star_private_key key. Take care to maintain the indentation of 2 spaces or your yml file will no longer be correctly formatted!
  • Rerun the Ansible deployment scripts for the loadbalancer only, by adding "--tags lb" to the provision script
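The bundling and copying steps above can be sketched as follows; the "myenv" environment name is a placeholder and the certificate bodies here are dummies standing in for your real certificate and CA chain:

```shell
# Dummy stand-ins for your real certificate and intermediate CA file.
printf -- '-----BEGIN CERTIFICATE-----\n(cert)\n-----END CERTIFICATE-----\n' > cert.pem
printf -- '-----BEGIN CERTIFICATE-----\n(ca)\n-----END CERTIFICATE-----\n' > ca.pem
# Concatenate: certificate first, then the (intermediate) CA chain.
mkdir -p environments-external/myenv/files/certs
cat cert.pem ca.pem > environments-external/myenv/files/certs/star.myenv.pem
```

After also replacing the https_star_private_key value in the secrets file, rerun the provision script with --tags lb.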