Merge pull request #89 from NSLS-II/deployment-docs
Add deployment documentation.
danielballan authored Oct 31, 2018
2 parents 26c1eed + cdbe2e7 commit 1cc2f22
Showing 13 changed files with 739 additions and 224 deletions.
49 changes: 0 additions & 49 deletions deploying/metadatastore.rst

This file was deleted.

196 changes: 196 additions & 0 deletions source/components/ansible-setup.rst
@@ -0,0 +1,196 @@
******************************
NSLS-II DAMA Ansible Playbooks
******************************

Historical Note: This repository began as a fork of
jupyterhub/jupyterhub-deploy-teaching. It has grown from a JupyterHub playbook
into a general collection of playbooks for DAMA.

Getting Started
===============

These playbooks use your local SSH configuration (see
https://docs.ansible.com/ansible/latest/faq.html#how-do-i-get-ansible-to-reuse-connections-enable-kerberized-ssh-or-have-ansible-pay-attention-to-my-local-ssh-config-file
for more info) to hop into the BNL network and then into the NSLS-II controls
network. Your ``~/.ssh/config`` must include a host named ``ossh``
("outer ssh") configured like this or similar:

.. code-block:: bash

    Host bnl
        HostName ssh.bnl.gov
        ForwardAgent yes

    Host ossh
        ForwardAgent yes
        ProxyCommand ssh -q bnl nc ssh.nsls2.bnl.gov 22

If instead you are connecting to the NSLS-II controls network after VPNing into
the BNL network, then your ``~/.ssh/config`` must include a host named ``issh``
("inner ssh") configured like this or similar:

.. code-block:: bash

    Host issh
        HostName ssh.nsls2.bnl.gov
        User awalter
        ForwardAgent yes

It is recommended, but not required, that you include both. It is also necessary
to update ``playbooks/group_vars/all`` from:

.. code-block:: bash

    ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q ossh"'

to:

.. code-block:: bash

    ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q issh"'

The inventories include some variables encrypted with ansible-vault.
To use any of these playbooks, you will need the vault password, which is stored
in LastPass. As of this writing, Stuart Campbell, Thomas Caswell, and Daniel
Allan have access to it. Stash the password in a file named
``vault_password_file`` in the root directory of this repository.
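
If Ansible does not pick the password file up automatically (for example, if it
is not referenced from ``ansible.cfg``), it can be passed explicitly; a minimal
sketch, assuming the file name above:

.. code-block:: bash

    # Point Ansible at the vault password file explicitly
    ansible-playbook -i production beamlines.yml --vault-password-file vault_password_file -bkK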

You will also need sudo access on the hosts you want to change if that change
requires privilege escalation.

Organization
============

Inventories
-----------

There are two inventories: staging and production. Currently, the only
staging host is jupyterdev, for testing JupyterHub deployments. There is no
staging host for beamline deployments, but there should be.

Playbooks
---------

As suggested by `Ansible best practices <https://docs.ansible.com/ansible/latest/playbooks_best_practices.html>`_,
there is one playbook per "kind" of machine we deploy to: ``jupyterhub.yml`` and
``beamlines.yml``.

Deploying
=========

A Simple Ping Test to Get Started
---------------------------------

.. code-block:: bash

    ansible -i production beamlines -m ping

where ``production`` refers to an inventory file in this repository,
``beamlines`` is a group of hosts in that inventory, and ``ping`` is a built-in
Ansible module.
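
To check which machines are in that group before touching anything,
``--list-hosts`` is handy:

.. code-block:: bash

    # List the hosts that make up the "beamlines" group without running anything
    ansible -i production beamlines --list-hosts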

JupyterHub
----------

The JupyterHub playbook configures a machine to run the JupyterHub process
and single-user notebook server processes.

Deploy it to the staging inventory first, which updates
https://notedev.nsls2.bnl.gov.

.. code-block:: bash

    ansible-playbook -i staging jupyterhub.yml -bkK

If all works as expected, update https://notebook.nsls2.bnl.gov:

.. code-block:: bash

    ansible-playbook -i production jupyterhub.yml -bkK

Beamlines
---------

The Beamlines playbook installs databroker configuration files and creates
conda environments based on environment files.

.. code-block:: bash

    ansible-playbook -i production beamlines.yml -bkK

Changing default environments
-----------------------------

This should only be used in a targeted way, one beamline at a time.

Update the ``current_env_tag`` under the beamline in question in ``production``
(located in the root of the repo).

Use ``--limit=XXX`` to target the playbook at the servers of a single beamline,
where ``XXX`` is, for example, ``04-ID``.

.. code-block:: bash

    ansible-playbook -i production update_default_vars.yml --limit=XXX -bkK

Where to Make Changes
*********************

Updating conda
--------------

Note that ``conda update -n root conda`` is not always sufficient because the root
environment may have an old version of Python no longer supported by conda.
Instead, use ``conda install -n root python=3.6 conda``. (I use ``python=3.6``,
the latest version as of this writing, but that should be updated to the latest
stable Python.)

.. code-block:: bash

    ansible -i production beamlines -a "/opt/conda/bin/conda install -n root python=3.6 conda" -bkK

New hosts
---------

Edit ``production``.
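
A minimal sketch of what a new entry might look like, assuming the inventory
uses Ansible's INI format (the group name and hostnames are illustrative):

.. code-block:: bash

    # Hypothetical new beamline group in the ``production`` inventory
    [04-ID]
    xf04id-ws1
    xf04id-ws2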

New databroker configuration
----------------------------

Add a file to ``roles/databroker_config/files/`` and deploy the beamline
playbook.
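
As a rough sketch of what such a file might contain, based on the databroker
0.x YAML configuration format (the filename, server name, and database names
here are illustrative, not real deployments):

.. code-block:: bash

    # Create a hypothetical databroker configuration file for a new beamline
    cat > roles/databroker_config/files/xf04id.yml <<'EOF'
    description: 04-ID databroker configuration
    metadatastore:
        module: databroker.headersource.mongo
        class: MDS
        config:
            host: xf04id-ca1
            port: 27017
            database: metadatastore-production-v1
            timezone: US/Eastern
    assets:
        module: databroker.assets.mongo
        class: Registry
        config:
            host: xf04id-ca1
            port: 27017
            database: filestore-production-v1
    EOF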

New environments
----------------

Add a file to ``roles/beamline_envs/files/`` and deploy the beamline
playbook.
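
One way to produce such a file is to export an existing, working environment
from a machine that has it (the environment name and filename are illustrative):

.. code-block:: bash

    # Export a working environment into the role's files directory
    /opt/conda/bin/conda env export -n collection-2018-2.0 > roles/beamline_envs/files/collection-2018-2.0.yml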

New kernels
-----------

Add items to the ``kernelspecs`` section in ``group_vars/jupyterhub_hosts`` and
deploy the ``jupyterhub`` playbook. Additional info is given in the comments in
that file.
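
After deploying, a quick sanity check from a shell on the JupyterHub host
(assuming Jupyter is installed in the system-wide conda) is to list the
registered kernelspecs:

.. code-block:: bash

    # List the kernels JupyterHub will offer to users
    /opt/conda/bin/jupyter kernelspec list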

Fresh installation
------------------

Add your SSH key to the new hosts:

.. code-block:: bash

    ssh-copy-id xf18id-ws1
    ssh-copy-id xf18id-ws2

Then run the beamlines playbook, limiting it to the new beamline:

.. code-block:: bash

    ansible-playbook -i production beamlines.yml --limit=18-ID -bkK

83 changes: 83 additions & 0 deletions source/components/conda.rst
@@ -0,0 +1,83 @@
*****
Conda
*****

Conda Installation
==================

We provide a system-wide installation of conda, which enables us to provide it
to all users by default and to configure it in certain ways. Note that this is
different from how individuals usually work with conda, installing it into
their home directories (``~/miniconda3``). We strongly advise users *not* to
install conda on beamline machines, but to use the binary provided by us
at ``/opt/conda/bin/conda``.
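
In practice a user on a beamline machine calls the shared binary directly, for
example (the environment name is illustrative):

.. code-block:: bash

    # See which environments the system-wide conda knows about
    /opt/conda/bin/conda env list

    # Activate one of the managed environments
    source /opt/conda/bin/activate collection-2018-2.0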

Conda Server
============

We deploy our own internal conda server, akin to anaconda.org, where we
store copies of our own packages and mirrored copies of external packages. We
only ever mirror "official" packages built by Anaconda. Anything else we need,
we build ourselves.

We configure conda to use our internal server instead of anaconda.org. We
provide copies of our packages on anaconda.org *as well* as a convenience for
users and collaborators, but we do not use anaconda.org for any machines inside
the Controls network.

Conda Configuration Files
=========================

The system-wide installation of conda looks for its configuration file in
``/opt/conda/.condarc``. Users *should not* create their own configuration
files in ``~/.condarc``; that would override the system configuration, and
conda may not work.

A second configuration file located at ``/etc/xdg/binstar/config.yaml`` tells
conda's web client where to look for packages. Again, we aim it at our internal
conda server instead of anaconda.org.
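
A read-only way to confirm where conda is actually pointed is to ask it for its
effective channel configuration and the files it was loaded from:

.. code-block:: bash

    # Show the configured channels and which configuration file each setting comes from
    /opt/conda/bin/conda config --show channels
    /opt/conda/bin/conda config --show-sources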

Conda Environments
==================

Conda is configured to look for environments in two places:

* ``/opt/conda_envs/`` (system)
* ``~/conda_envs`` (user)

The system location requires superuser privileges to change, so we use it to
store stable environments deployed and managed by us. Any user can use them, but
users are not allowed to change them. Users who want to create their own custom
conda environments can put them in the user location.
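
For example, a user can create a personal environment under the user location
with ``--prefix`` (the environment name and package list are illustrative):

.. code-block:: bash

    # Create a personal environment in the user-writable location
    /opt/conda/bin/conda create --prefix ~/conda_envs/my-analysis python=3.6 numpy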

Conda Environment Files
=======================

An environment file fully specifies a conda environment, including the specific
versions and builds of all the packages and the channels from which they
were obtained. For each software deployment, we create an ``analysis``
environment file and ``collection`` environment file. We store them in version
control in the ``roles/beamline_envs/files`` directory of the
`NSLS-II/playbooks <https://github.com/NSLS-II/playbooks>`_ repository
(private).
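
Recreating one of these environments by hand from its file would look roughly
like this (paths and names are illustrative; the beamlines playbook normally
performs this step):

.. code-block:: bash

    # Build the environment described by an environment file into the system location
    /opt/conda/bin/conda env create -f roles/beamline_envs/files/collection-2018-2.0.yml --prefix /opt/conda_envs/collection-2018-2.0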

Conda Metapackages
==================

A `metapackage <https://conda.io/docs/glossary.html#metapackage>`_
is a conda package that contains only a list of dependencies but not functional
code itself.

We maintain several metapackages:

* ``analysis``, which depends on libraries for data analysis (e.g.
  scikit-beam), data access (e.g. databroker), and simulation (e.g. bluesky)
* ``collection``, which depends on ``analysis``, so it includes a superset of
those dependencies, adding some additional ones only needed for data
acquisition (e.g. nslsii)
* several beamline-specific packages, with names like ``11-id-chx-analysis`` or
``11-id-chx-collection`` that depend on the general ``analysis`` or
``collection`` package and add some beamline-specific requirements

They are located in the ``recipes-tag`` directory of
`NSLS-II/lightsource2-recipes <https://github.com/NSLS-II/lightsource2-recipes>`_.
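
Because a metapackage carries only dependencies, installing it pulls in the
whole stack at once; a sketch, reusing the illustrative environment path from
above:

.. code-block:: bash

    # Create an environment whose contents are defined entirely by a metapackage
    /opt/conda/bin/conda create --prefix /opt/conda_envs/collection-2018-2.0 collection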
