Documentation maintenance #248
Merged Mar 27, 2024 (12 commits)
2 changes: 1 addition & 1 deletion config/nowcast.yaml
@@ -1242,7 +1242,7 @@ message registry:
no after_worker function: ERROR - after_worker function not found in next_workers module

# Module from which to load :py:func:`after_<worker_name>` functions
# that provide lists of workers to launch when :kbd:`worker_name` finishes.
# that provide lists of workers to launch when ``worker_name`` finishes.
# Use fully qualified, dotted notation.
next workers module: nowcast.next_workers
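The comment above describes the dispatch mechanism: when a worker finishes, the system imports the module named by ``next workers module`` and calls its ``after_<worker_name>`` function to get the list of workers to launch next. A quick way to confirm that the dotted module path resolves in a deployed environment (a sketch; run it in the activated nowcast environment):

.. code-block:: bash

# Prints the file the module was loaded from; raises ImportError on a bad path
$ python -c "import importlib; m = importlib.import_module('nowcast.next_workers'); print(m.__file__)"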

2 changes: 1 addition & 1 deletion config/ww3_hindcast.yaml
@@ -212,7 +212,7 @@ message registry:
no after_worker function: ERROR - after_worker function not found in next_workers module

# Module from which to load :py:func:`after_<worker_name>` functions
# that provide lists of workers to launch when :kbd:`worker_name` finishes.
# that provide lists of workers to launch when ``worker_name`` finishes.
# Use fully qualified, dotted notation.
next workers module: nowcast.next_workers

5 changes: 1 addition & 4 deletions docs/conf.py
@@ -86,15 +86,12 @@
"moad_tools.places",
"mpl_toolkits.basemap",
"nemo_cmd",
"nemo_nowcast.fileutils",
"nemo_nowcast.workers",
"nemo_nowcast.workers.clear_checklist",
"nemo_nowcast.workers.rotate_logs",
"netCDF4",
"OPPTools",
"pandas",
"paramiko",
"PyPDF2",
"reshapr",
"salishsea_cmd",
"salishsea_cmd.api",
"salishsea_tools",
52 changes: 26 additions & 26 deletions docs/deployment/index.rst
@@ -30,42 +30,42 @@ The production deployment uses 2 systems:
#. The :py:mod:`nemo_nowcast.message_broker`,
:py:mod:`nemo_nowcast.manager`,
:py:mod:`nemo_nowcast.log_aggregator`,
most of the pre- and post-processing workers run on the :ref:`SalishSeaModelResultsServer`, :kbd:`skookum`, where the deployment is in the :file:`/SalishSeaCast/` directory tree.
most of the pre- and post-processing workers run on the :ref:`SalishSeaModelResultsServer`, ``skookum``, where the deployment is in the :file:`/SalishSeaCast/` directory tree.

#. The daily
:kbd:`forecast2`
``forecast2``
(preliminary forecast),
:kbd:`nowcast`,
:kbd:`forecast`,
and :kbd:`nowcast-green` NEMO-3.6 model runs are computed on a cluster of virtual machines provided by `Ocean Networks Canada`_ on the Compute Canada `arbutus.cloud`_ cluster.
The shared storage for those VMs is provided by an NFS-mounted volume of ::kbd:`arbutus.cloud` `Ceph object storage`_.
``nowcast``,
``forecast``,
and ``nowcast-green`` NEMO-3.6 model runs are computed on a cluster of virtual machines provided by `Ocean Networks Canada`_ on the Compute Canada `arbutus.cloud`_ cluster.
The shared storage for those VMs is provided by an NFS-mounted volume of ``arbutus.cloud`` `Ceph object storage`_.
The nowcast deployment is in the :file:`/nemoShare/MEOPAR/nowcast-sys/` directory tree.

In April 2017,
daily :kbd:`wwatch3-nowcast`,
daily :kbd:`wwatch3-forecast`,
and :kbd:`wwatch3-forecast2`
daily ``wwatch3-nowcast``,
daily ``wwatch3-forecast``,
and ``wwatch3-forecast2``
(preliminary wave forecast)
WaveWatch III® v5.16 wave model runs for the Strait of Georgia and Juan de Fuca Strait were added to the computations on :kbd:`arbutus.cloud`.
The :kbd:`wwatch3-nowcast` and :kbd:`wwatch3-forecast` runs are executed in sequence after the daily :kbd:`nowcast-green` NEMO-3.6 runs.
The :kbd:`wwatch3-forecast2` runs are executed after the daily :kbd:`forecast2` NEMO-3.6 runs.
WaveWatch III® v5.16 wave model runs for the Strait of Georgia and Juan de Fuca Strait were added to the computations on ``arbutus.cloud``.
The ``wwatch3-nowcast`` and ``wwatch3-forecast`` runs are executed in sequence after the daily ``nowcast-green`` NEMO-3.6 runs.
The ``wwatch3-forecast2`` runs are executed after the daily ``forecast2`` NEMO-3.6 runs.

In January 2018,
daily :kbd:`fvcom-nowcast`
and :kbd:`fvcom-forecast` FVCOM v4.1-beta model runs for Vancouver Harbour and the Lower Fraser River were added to the computations on the :kbd:`arbutus.cloud`.
They are executed in sequence after the daily :kbd:`nowcast` NEMO-3.6 runs.
daily ``fvcom-nowcast``
and ``fvcom-forecast`` FVCOM v4.1-beta model runs for Vancouver Harbour and the Lower Fraser River were added to the computations on the ``arbutus.cloud``.
They are executed in sequence after the daily ``nowcast`` NEMO-3.6 runs.
In January 2019,
the resolution of the Vancouver Harbour and Lower Fraser River FVCOM v4.1-beta model was increased.
Those runs are designated :kbd:`fvcom-nowcast-x2` and :kbd:`fvcom-forecast-x2`.
Those runs are designated ``fvcom-nowcast-x2`` and ``fvcom-forecast-x2``.
In March 2019,
an even higher resolution Vancouver Harbour and Lower Fraser River model configuration was added to the system,
running daily nowcast runs as :kbd:`fvcom-nowcast-r12`.
running daily nowcast runs as ``fvcom-nowcast-r12``.

.. _Ocean Networks Canada: https://www.oceannetworks.ca/
.. _arbutus.cloud: https://docs.alliancecan.ca/wiki/Cloud_resources#Arbutus_cloud
.. _Ceph object storage: https://en.wikipedia.org/wiki/Ceph_(software)

These sections describe the setup of the nowcast system on :kbd:`skookum` and :kbd:`arbutus.cloud`,
These sections describe the setup of the nowcast system on ``skookum`` and ``arbutus.cloud``,
and their operation.

.. toctree::
@@ -75,20 +75,20 @@ and their operation.
arbutus_cloud
operations

In May 2018 production runs of a :kbd:`nowcast-green` configuration with AGRIF sub-grids for Baynes Sound and Haro Strait were added to the system.
Those runs are executed on a reserved chassis on :kbd:`orcinus`.
The setup on :kbd:`orcinus`,
In May 2018 production runs of a ``nowcast-green`` configuration with AGRIF sub-grids for Baynes Sound and Haro Strait were added to the system.
Those runs are executed on a reserved chassis on ``orcinus``.
The setup on ``orcinus``,
as well as the sub-grid initialization preparation with the NEMO-AGRIF nesting tools, are described in:

.. toctree::
:maxdepth: 2

orcinus

In February 2019 we got access to the UBC EOAS :kbd:`optimum` cluster.
In February 2019 we got access to the UBC EOAS ``optimum`` cluster.
We use it primarily for long hindcast runs,
but also some research runs.
The setup on :kbd:`optimum` is described in:
The setup on ``optimum`` is described in:

.. toctree::
:maxdepth: 2
@@ -100,8 +100,8 @@ See also the `#optimum-cluster`_ Slack channel.
.. _#optimum-cluster: https://salishseacast.slack.com/?redir=%2Farchives%2FC011S7BCWGK

With the update of the production to run the V21-11 model version in January 2024,
we decided to end the daily :kbd:`nowcast-dev` development model runs on :kbd:`salish`.
Development is now generally done in research runs on :kbd:`graham`.
:kbd:`salish` is now mostly used for analysis tasks, post-processing of NEMO model results files
we decided to end the daily ``nowcast-dev`` development model runs on ``salish``.
Development is now generally done in research runs on ``graham``.
``salish`` is now mostly used for analysis tasks, post-processing of NEMO model results files
to produce day-average and month-average dataset files,
and Lagrangian particle tracking analysis with :program:`ariane` and :program:`OceanParcels`.
32 changes: 16 additions & 16 deletions docs/deployment/operations.rst
@@ -76,30 +76,30 @@ Start it with:
.. _supervisorctl: http://supervisord.org/running.html#running-supervisorctl

See the `supervisorctl`_ docs,
or use the :kbd:`help` command within :command:`supervisorctl` to get information on the available commands.
or use the ``help`` command within :command:`supervisorctl` to get information on the available commands.
A few that are useful:

* :kbd:`avail` to get a list of the processes that :command:`supervisord` is configured to manage
* :kbd:`status` to see their status
* :kbd:`stop` to stop a process;
e.g. :kbd:`stop manager`
* :kbd:`start` to start a stopped process;
e.g. :kbd:`start manager`
* :kbd:`restart` to stop and restart a process;
e.g. :kbd:`restart manager`
* :kbd:`signal hup` to send a :kbd:`HUP` signal to a process,
* ``avail`` to get a list of the processes that :command:`supervisord` is configured to manage
* ``status`` to see their status
* ``stop`` to stop a process;
e.g. ``stop manager``
* ``start`` to start a stopped process;
e.g. ``start manager``
* ``restart`` to stop and restart a process;
e.g. ``restart manager``
* ``signal hup`` to send a ``HUP`` signal to a process,
which will cause it to reload its configuration from the :envvar:`NOWCAST_YAML` file that the process was started with;
e.g. :kbd:`signal hup manager`.
e.g. ``signal hup manager``.
This is the way to communicate nowcast system configuration changes to the long-running processes.
* :kbd:`shutdown` to stop all of the processes and shutdown :command:`supervisord`
* ``shutdown`` to stop all of the processes and shutdown :command:`supervisord`

Use :kbd:`quit` or :kbd:`exit` to exit from :command:`supervisorctl`.
Use ``quit`` or ``exit`` to exit from :command:`supervisorctl`.
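A short interactive session tying those commands together might look like this (a sketch; the :file:`supervisord.ini` path is illustrative, and ``manager`` is one of the process names that ``avail`` lists):

.. code-block:: bash

$ supervisorctl -c supervisord.ini  # config file path is hypothetical
supervisor> status
supervisor> restart manager
supervisor> signal hup manager  # reload config from the NOWCAST_YAML file
supervisor> quit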

`sr_subscribe`_ is the command-line interface for interacting with the `sarracenia client`_ that maintains mirrors of the HRDPS forecast files and rivers hydrometric files from the `ECCC MSC datamart service`_.

.. _sr_subscribe: https://github.com/MetPX/sarracenia/blob/v2_dev/doc/sr_subscribe.1.rst

:command:`sr_subscribe` is run in :kbd:`foreground` mode instead of daemonized so that it can be managed by ::command:`supervisord`.
:command:`sr_subscribe` is run in ``foreground`` mode instead of being daemonized so that it can be managed by :command:`supervisord`.
Use :command:`supervisorctl` to view the :command:`sr_subscribe` log files:

.. code-block:: bash
@@ -116,8 +116,8 @@ or
Use :command:`tail -f` to follow the logs to view updates as they occur.
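Logs can also be followed directly from the shell with the ``tail -f`` sub-command of :command:`supervisorctl` (a sketch; the process name is illustrative, use ``avail`` to list the real ones):

.. code-block:: bash

# process name is hypothetical
$ supervisorctl tail -f sr_subscribe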


Automatic Deployment of Changes to :kbd:`salishsea-site` App
============================================================
Automatic Deployment of Changes to ``salishsea-site`` App
=========================================================

A `GitHub Actions workflow`_ causes changes to be pulled and updated to :file:`/SalishSeaCast/salishsea-site/` and the app to be restarted via :command:`supervisorctl` whenever changes are pushed to the repo on GitHub.

28 changes: 14 additions & 14 deletions docs/deployment/optimum.rst
@@ -18,11 +18,11 @@

.. _OptimumDeployment:

*******************************************
:kbd:`optimum` Deployment for Hindcast Runs
*******************************************
****************************************
``optimum`` Deployment for Hindcast Runs
****************************************

Doug maintains the production deployment on :kbd:`optimum` in the group :kbd:`sallen` directory trees.
Doug maintains the production deployment on ``optimum`` in the group ``sallen`` directory trees.
That means that,
for the purposes of these docs,
the value of :envvar:`HOME` is :file:`/home/sallen/dlatorne`.
@@ -36,7 +36,7 @@ Add these environment variable definitions to :file:`$HOME/.bash_profile`::
export FORCING=/data/sallen/shared
export PROJECT=/home/sallen/dlatorne

:kbd:`optimum` provides automatically defined environment variables for:
``optimum`` provides automatically defined environment variables for:

:envvar:`ARCHIVEDIR`
for storing semi-permanent input and results. (no backup)
@@ -51,14 +51,14 @@ Add these environment variable definitions to :file:`$HOME/.bash_profile`::
Module Loads
============

The default module loads to use on :kbd:`optimum` are::
The default module loads to use on ``optimum`` are::

module load OpenMPI/2.1.6/GCC/SYSTEM
module load GIT/2/03.03

Loading of those modules is included in :file:`$HOME/.bashrc`.

There is a :kbd:`Miniconda/3` module available for building Python Conda environments.
There is a ``Miniconda/3`` module available for building Python Conda environments.
Conda environments created with that module loaded are stored in :file:`$HOME/.conda/envs/`.
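A minimal sketch of creating one of those environments (the environment name and package list are illustrative):

.. code-block:: bash

$ module load Miniconda/3
$ conda create -n analysis python=3 numpy netCDF4
$ source activate analysis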

There is something funky about :program:`REBUILD_NEMO` and the way it uses netCDF that requires a different collection of modules in order to avoid a run-time error about netCDF4 operations on netCDF3 files
@@ -123,7 +123,7 @@ Clone the following repos into :file:`$PROJECT/SalishSeaCast/hindcast-sys/`:
Build XIOS-2
============

Symlink the XIOS-2 build configuration files for :kbd:`optimum` from the :file:`XIOS-ARCH` repo clone into the :file:`XIOS-2/arch/` directory:
Symlink the XIOS-2 build configuration files for ``optimum`` from the :file:`XIOS-ARCH` repo clone into the :file:`XIOS-2/arch/` directory:

.. code-block:: bash

@@ -135,10 +135,10 @@ Symlink the XIOS-2 build configuration files for :file:`
Despite many attempts with various combinations of compilers,
OpenMPI library versions,
and netCDF library versions,
the only way found to successfully build XIOS-2 is with the :kbd:`OpenMPI/2.1.6/GCC/SYSTEM` module.
That forces us to use the SVN :kbd:`r1066` checkout version of XIOS-2.
That version is pointed to by both the :kbd:`XIOS-2r1066` and the :kbd:`PROD-hindcast_201905-v3`
(and later :kbd:`PROD-hindcast_*`)
the only way found to successfully build XIOS-2 is with the ``OpenMPI/2.1.6/GCC/SYSTEM`` module.
That forces us to use the SVN ``r1066`` checkout version of XIOS-2.
That version is pointed to by both the ``XIOS-2r1066`` and the ``PROD-hindcast_201905-v3``
(and later ``PROD-hindcast_*``)
Git tags,
so create a branch to checkout the repo at one of those tags:

Expand All @@ -153,7 +153,7 @@ and build XIOS-2 with:
$ cd $PROJECT/SalishSeaCast/hindcast-sys/XIOS-2/
$ ./make_xios --arch GCC_OPTIMUM --netcdf_lib netcdf4_seq --job 8

:kbd:`--netcdf_lib netcdf4_seq` is necessary because the :kbd:`OpenMPI/2.1.6/GCC/SYSTEM` NetCDF libraries are not built for parallel output.
``--netcdf_lib netcdf4_seq`` is necessary because the ``OpenMPI/2.1.6/GCC/SYSTEM`` NetCDF libraries are not built for parallel output.

To clear away all artifacts of a previous build of XIOS-2 use:

Expand Down Expand Up @@ -209,7 +209,7 @@ Build it with:
Install Python Packages
=======================

Load the :kbd:`Miniconda/3` module and create a Conda environment:
Load the ``Miniconda/3`` module and create a Conda environment:

.. code-block:: bash

30 changes: 15 additions & 15 deletions docs/deployment/orcinus.rst
@@ -18,9 +18,9 @@

.. _OrcinusDeployment:

*******************************************************
:kbd:`orcinus` Deployment for :kbd:`nowcast-agrif` Runs
*******************************************************
*************************************************
``orcinus`` Deployment for ``nowcast-agrif`` Runs
*************************************************

Create Directory Trees
======================
@@ -65,7 +65,7 @@ Clone the following repos into :file:`/home/dlatorne/nowcast-agrif-sys/`:
Build XIOS-2
============

Symlink the XIOS-2 build configuration files for :kbd:`orcinus` from the :file:`XIOS-ARCH` repo clone into the :file:`XIOS-2/arch/` directory:
Symlink the XIOS-2 build configuration files for ``orcinus`` from the :file:`XIOS-ARCH` repo clone into the :file:`XIOS-2/arch/` directory:

.. code-block:: bash

@@ -80,7 +80,7 @@ and build XIOS-2 with:
$ cd /home/dlatorne/nowcast-agrif-sys/XIOS-2
$ ./make_xios --arch X64_ORCINUS --netcdf_lib netcdf4_seq --job 8

:kbd:`--netcdf_lib netcdf4_seq` is necessary because AGRIF does not support parallel NetCDF output.
``--netcdf_lib netcdf4_seq`` is necessary because AGRIF does not support parallel NetCDF output.

To clear away all artifacts of a previous build of XIOS-2 use:

@@ -157,7 +157,7 @@ Sub-grid Initialization Preparation with Nesting Tools
Build Nesting Tools
-------------------

Clone Michael Dunphies' debugged version of the nesting tools for AGRIF from :file:`NEMO-3.6-code/NEMOGCM/TOOLS/NESTING/` on to :kbd:`salish`:
Clone Michael Dunphy's debugged version of the nesting tools for AGRIF from :file:`NEMO-3.6-code/NEMOGCM/TOOLS/NESTING/` onto ``salish``:

.. code-block:: bash

@@ -197,7 +197,7 @@ Coordinates
For the Baynes Sound sub-grid,
use :program:`agrif_create_coordinates.exe` to create the sub-grid coordinates file from the full domain coordinates
(path provided in the :file:`namelist.nesting.BaynesSound` file),
and add it to the :kbd:`grid` repo:
and add it to the ``grid`` repo:

.. code-block:: bash

@@ -231,7 +231,7 @@ Bathymetry

.. note::
Need to understand the details of how sub-grid bathymetries are generated.
They appear to be based on :file:`/home/mdunphy/MEOPAR/WORK/Bathy-201702/BC3/BC3_For_Nesting_Tools.nc` and a :kbd:`bathymetry` namelist like:
They appear to be based on :file:`/home/mdunphy/MEOPAR/WORK/Bathy-201702/BC3/BC3_For_Nesting_Tools.nc` and a ``bathymetry`` namelist like:

.. code-block:: bash

@@ -260,7 +260,7 @@ we can construct an acceptable rivers biological tracers forcing file for the Ba
This will have to be revisited if/when we change the Puntledge River to use real-time discharges values from a gauge.

Calculate the :file:`rivers-climatology/bio/subgrids/BaynesSound/bio/rivers_bio_tracers_mean.nc`,
and add it to the :kbd:`rivers-climatology` repo:
and add it to the ``rivers-climatology`` repo:

.. code-block:: bash

@@ -281,7 +281,7 @@ The commands in this section are for generation of sub-grid physics restart file

For the Baynes Sound sub-grid,
use :program:`agrif_create_restart.exe` to create the sub-grid physics restart file from the full domain physics restart file,
and upload both files to the appropriate run results directory on :kbd:`orcinus`:
and upload both files to the appropriate run results directory on ``orcinus``:

.. code-block:: bash

@@ -316,7 +316,7 @@ The commands in this section are for generation of sub-grid tracer restart files

For the Baynes Sound sub-grid,
use :program:`agrif_create_restart_trc.exe` to create the sub-grid tracer restart file from the full domain tracer restart file,
and upload both files to the appropriate run results directory on :kbd:`orcinus`:
and upload both files to the appropriate run results directory on ``orcinus``:

.. code-block:: bash

@@ -339,10 +339,10 @@ start by using :program:`agrif_create_restart_trc.exe` to create the sub-grid tr
$ /data/dlatorne/MEOPAR/NestingTools/NEMOGCM/TOOLS/NESTING/agrif_create_restart_trc.exe \
namelist.nesting.HaroStrait

For some reason :program:`agrif_create_restart_trc.exe` fails to store the variable :kbd:`TRBTRA`
(the Fraser River tracer :kbd:`B` field, and the final variable)
For some reason :program:`agrif_create_restart_trc.exe` fails to store the variable ``TRBTRA``
(the Fraser River tracer ``B`` field, and the final variable)
in the file it produces.
To deal with that we duplicate the :kbd:`TRNTRA` field values as :kbd:`TRBTRA` and append that variable to the file:
To deal with that we duplicate the ``TRNTRA`` field values as ``TRBTRA`` and append that variable to the file:

.. code-block:: bash

@@ -351,7 +351,7 @@ To deal with that we duplicate the :kbd:`TRNTRA` field values as :kbd:`TRBTRA` a
$ ncrename -O -v TRNTRA,TRBTRA TRNTRA.nc TRBTRA.nc
$ ncks -4 -A TRBTRA.nc 1_SalishSea_02935440_restart_trc.nc
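Before uploading, a quick sanity check that the appended variable is actually in the restart file can save a failed run (a sketch, using the file name from the commands above):

.. code-block:: bash

# Dump the header and confirm TRBTRA is present
$ ncdump -h 1_SalishSea_02935440_restart_trc.nc | grep TRBTRA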

and upload the file to the appropriate run results directory on :kbd:`orcinus`:
and upload the file to the appropriate run results directory on ``orcinus``:

.. code-block:: bash
