diff --git a/.github/workflows/mkdocs_test.yml b/.github/workflows/mkdocs_test.yml
new file mode 100644
index 0000000..7eeea86
--- /dev/null
+++ b/.github/workflows/mkdocs_test.yml
@@ -0,0 +1,30 @@
+name: build documentation
+on: [push, pull_request]
+# Declare default permissions as read only.
+permissions: read-all
+jobs:
+  build:
+    runs-on: ubuntu-22.04
+    steps:
+      - name: checkout
+        uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
+
+      - name: set up Python
+        uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # v5.0.0
+        with:
+          python-version: '3.10'
+
+      # - name: Markdown Linting Action
+      #   uses: avto-dev/markdown-lint@v1.2.0
+      #   with:
+      #     rules: '/lint/rules/changelog.js'
+      #     config: '/lint/config/changelog.yml'
+      #     args: '.'
+
+      - name: install mkdocs + plugins
+        run: |
+          pip install mkdocs mkdocs-material
+          pip list | grep mkdocs
+          mkdocs --version
+      - name: build
+        run: mkdocs build --strict
diff --git a/.gitignore b/.gitignore
index 2cc4ee8..5dcd8bb 100644
--- a/.gitignore
+++ b/.gitignore
@@ -146,3 +146,4 @@ public.cert
idp_metadata.xml
.DS_Store
+.site/
diff --git a/README.md b/README.md
index 3a1a61c..248383a 100644
--- a/README.md
+++ b/README.md
@@ -3,20 +3,15 @@
[![DOI](https://zenodo.org/badge/549763009.svg)](https://zenodo.org/badge/latestdoi/549763009)
-This web portal is intended to give HPC users a view of the overall use of the HPC cluster and their use. This portal is using the information collected on compute nodes and management servers to produce the information in the various modules:
+This web portal is intended to give HPC users a view of the overall use of the HPC cluster as well as of their own use. The portal uses the information collected on compute nodes and management servers to produce the information shown in its various modules:
-* [jobstats](docs/jobstats.md)
-* [accountstats](docs/accountstats.md)
-* [cloudstats](docs/cloudstats.md)
-* [quotas](docs/quotas.md)
-* [top](docs/top.md)
-* [usersummary](docs/usersummary.md)
+[Documentation](docs/index.md)
Some examples of the available graphs are displayed in the documentation of each module.
This portal is made to be modular, some modules can be disabled if the data required is not needed or collected. Some modules have optional dependencies and will hide some graphs if the data is not available.
-This portal also supports Openstack, the users can see their use without having to install a monitoring agent in their VM in their OpenStack VMs.
+This portal also supports OpenStack: users can see their usage without having to install a monitoring agent inside their OpenStack VMs.
Staff members can also see the use of any users to help them optimize their use of HPC and OpenStack clusters.
@@ -26,7 +21,7 @@ Some information collected is also available for the general public like the num
## Design
Performance metrics are stored in Prometheus, multiple exporters are used to gather this data, and most are optional.
-The Django portal will also access various MySQL databases like the database of Slurm and Robinhood (if installed) to gather some information. Timeseries are stored with Prometheus for better performance. Compatible alternatives to Prometheus like Thanos, VictoriaMetrics, and Grafana Mimir should work without any problems (Thanos is used in production). Recorder rules in Prometheus are used to pre-aggregate some stats for the portal.
+The Django portal will also access various MySQL databases, such as those of Slurm and Robinhood (if installed), to gather some information. Time series are stored in Prometheus for better performance. Compatible alternatives to Prometheus like Thanos, VictoriaMetrics, and Grafana Mimir should work without any problems (Thanos is used in production). Recording rules in Prometheus are used to pre-aggregate some stats for the portal.
![Architecture diagram](docs/userportal.png)
@@ -35,42 +30,3 @@ Various data sources are used to populate the content of this portal. Most of th
Some pre-aggregation is done using recorder rules in Prometheus. The required recorder rules are documented in the data sources documentation.
[Data sources documentation](docs/data.md)
-
-## Test environment
-A test environment using the local `uid` resolver and dummies allocations is provided to test the portal.
-
-To use it, copy `example/local.py` to `userportal/local.py`. The other functions are documented in `common.py` if any other overrides are needed for your environment.
-
-To quickly test and bypass authentication, add this line to `userportal/settings/99-local.py`. Other local configuration can be added in this file to override the default settings.
-
-```
-AUTHENTICATION_BACKENDS.insert(0, 'userportal.authentication.staffRemoteUserBackend')
-```
-
-This bypasses the authentication and will use the `REMOTE_USER` header or env variable to authenticate the user. This is useful to be able to try the portal without having to set up a full IDP environment. The REMOTE_USER method can be used when using some IDP such as Shibboleth. SAML2 based IDP is now the preferred authentication method for production.
-
-Examine the default configuration in `userportal/settings/` and override any settings in `99-local.py` as needed.
-
-Then you can launch the example server with:
-
-```
-REMOTE_USER=someuser@alliancecan.ca affiliation=staff@alliancecan.ca python manage.py runserver
-```
-
-This will run the portal with the user `someuser` logged in as a staff member.
-
-Automated Django tests are also available, they can be run with:
-
-```
-python manage.py test
-```
-
-This will test the various modules, including reading job data from the Slurm database and Prometheus. A temporary database for Django is created automatically for the tests. Slurm and Prometheus data are read directly from production data with a read-only account. A representative user, job and account need to be defined to be used in the tests, check the `90-tests.py` file for an example.
-
-## Production install
-The portal can be installed directly on a Centos7 or Rocky8 Apache web server or with Nginx and Gunicorn. The portal can also be deployed as a container with Podman or Kubernetes. Some scripts used to deploy both Nginx and Django containers inside the same pod are provided in the `podman` directory.
-The various recommendations for any normal Django production deployment can be followed.
-
-[Deploying Django](https://docs.djangoproject.com/en/3.2/howto/deployment/)
-
-[Install documentation](docs/install.md)
\ No newline at end of file
diff --git a/docs/accountstats.md b/docs/accountstats.md
index 29377e9..07483f7 100644
--- a/docs/accountstats.md
+++ b/docs/accountstats.md
@@ -1,7 +1,9 @@
# Accountstats
The users can also see the aggregated use of the users in the same group. This also shows the current priority of this account in Slurm and a few months of history on how much computing resources were used.
-
+## Screenshots
+### Account stats
+![Stats per account](accountstats.png)
## Requirements
diff --git a/docs/cloudstats.md b/docs/cloudstats.md
index 568e68c..eefe90c 100644
--- a/docs/cloudstats.md
+++ b/docs/cloudstats.md
@@ -1,9 +1,13 @@
# Cloudstats
The stats of the VM running on Openstack can be viewed. This is using the stats of libvirtd, no agent needs to be installed in the VM. There is an overall stats page available for staff. The page per project and VM is also available for the users.
-
-
-
+## Screenshots
+### Overall use
+![Overall use](cloudstats.png)
+### Use within a project
+![Use within a project](cloudstats_project.png)
+### Use within a VM
+![Use within a VM](cloudstats_vm.png)
## Requirements
diff --git a/docs/data.md b/docs/data.md
index 24cbed3..8ed610c 100644
--- a/docs/data.md
+++ b/docs/data.md
@@ -1,5 +1,5 @@
# Data sources
-Some features will not be available if the exporter required to gather the stats is not configured.
+The main requirement to monitor a Slurm cluster is to install slurm-job-exporter and to grant read-only access to the Slurm MySQL database. The other data sources on this page can be installed to gather more data.
## slurm-job-exporter
[slurm-job-exporter](https://github.com/guilbaults/slurm-job-exporter) is used to capture information from cgroups managed by Slurm on each compute node. This gathers CPU, memory, and GPU utilization.
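+
+As a minimal sketch, a Prometheus scrape job for this exporter could look like the following (the job name, port, and target list are placeholders to adapt to your site):
+
+```
+scrape_configs:
+  - job_name: slurm-job-exporter
+    scrape_interval: 30s
+    static_configs:
+      - targets:
+          - compute-node-001:9798  # placeholder host:port for a compute node running the exporter
+```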
@@ -47,12 +47,12 @@ groups:
expr: sum(label_replace(deriv(slurm_job_process_usage_total{}[1m]) > 0, "bin", "$1", "exe", ".*/(.*)")) by (cluster, account, bin)
```
-## slurm-exporter
-[slurm-exporter](https://github.com/guilbaults/prometheus-slurm-exporter/tree/osc) is used to capture stats from Slurm like the priority of each user. This portal is using a fork, branch `osc` in the linked repository. This fork support GPU reporting and sshare stats.
-
## Access to the database of slurmacct
This MySQL database is accessed by a read-only user. It does not need to be in the same database server where Django is storing its data.
+## slurm-exporter
+[slurm-exporter](https://github.com/guilbaults/prometheus-slurm-exporter/tree/osc) is used to capture stats from Slurm like the priority of each user. This portal uses a fork, the `osc` branch in the linked repository. This fork supports GPU reporting and sshare stats.
+
## lustre\_exporter and lustre\_exporter\_slurm
Those 2 exporters are used to gather information about Lustre usage.
diff --git a/docs/development.md b/docs/development.md
new file mode 100644
index 0000000..a55c5e0
--- /dev/null
+++ b/docs/development.md
@@ -0,0 +1,29 @@
+# Development
+A test and development environment using the local `uid` resolver and dummy allocations is provided to try out the portal.
+
+To use it, copy `example/local.py` to `userportal/local.py`. The other functions that can be overridden for your environment are documented in `common.py`.
+
+To quickly test and bypass authentication, add this line to `userportal/settings/99-local.py`. Other local configuration can be added in this file to override the default settings.
+
+```
+AUTHENTICATION_BACKENDS.insert(0, 'userportal.authentication.staffRemoteUserBackend')
+```
+
+This bypasses the authentication and uses the `REMOTE_USER` header or environment variable to authenticate the user. This makes it possible to try the portal without having to set up a full IdP environment. The `REMOTE_USER` method can be used with some IdPs such as Shibboleth, but a SAML2-based IdP is now the preferred authentication method for production.
+
+Examine the default configuration in `userportal/settings/` and override any settings in `99-local.py` as needed.
+
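+As a minimal sketch, a `99-local.py` for local development could look like this (the values are illustrative assumptions; only the `AUTHENTICATION_BACKENDS` line comes from the instructions above):
+
+```
+# userportal/settings/99-local.py -- illustrative local overrides only
+DEBUG = True                                # never enable in production
+ALLOWED_HOSTS = ['localhost', '127.0.0.1']  # hosts allowed to reach the dev server
+
+# bypass authentication for local development, as described above
+AUTHENTICATION_BACKENDS.insert(0, 'userportal.authentication.staffRemoteUserBackend')
+```
+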
+Then you can launch the example server with:
+
+```
+REMOTE_USER=someuser@alliancecan.ca affiliation=staff@alliancecan.ca python manage.py runserver
+```
+
+This will run the portal with the user `someuser` logged in as a staff member.
+
+Automated Django tests are also available; they can be run with:
+
+```
+python manage.py test
+```
+
+This will test the various modules, including reading job data from the Slurm database and Prometheus. A temporary database for Django is created automatically for the tests. Slurm and Prometheus data are read directly from production with a read-only account. A representative user, job, and account need to be defined for use in the tests; check the `90-tests.py` file for an example.
\ No newline at end of file
diff --git a/docs/index.md b/docs/index.md
new file mode 100644
index 0000000..62454ae
--- /dev/null
+++ b/docs/index.md
@@ -0,0 +1,16 @@
+# TrailblazingTurtle
+
+## Introduction
+TrailblazingTurtle is a web portal for HPC clusters. It is designed to be a single point of entry for users to access information about the cluster, their jobs, and the performance of the cluster. The portal is modular, so it can easily be extended to support new features.
+
+## Design
+The Django portal accesses various MySQL databases, such as those of Slurm and Robinhood (if installed), to gather some information.
+
+Time series are stored in Prometheus for better performance. Compatible alternatives to Prometheus like Thanos, VictoriaMetrics, and Grafana Mimir should work without any problems (Thanos is used in production). Recording rules in Prometheus are used to pre-aggregate some stats for the portal.
+
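+As a minimal sketch of what such a pre-aggregation looks like (the group and rule names below are illustrative placeholders; the exact rules required by the portal are listed in the [data sources documentation](data.md)):
+
+```
+groups:
+  - name: userportal_preaggregation
+    rules:
+      # illustrative only: pre-aggregate per-account process usage
+      - record: slurm_job:process_usage:sum_account
+        expr: sum(deriv(slurm_job_process_usage_total{}[1m])) by (cluster, account)
+```
+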
+![Architecture diagram](userportal.png)
+
+* [Data collection](data.md)
+* [Development](development.md)
+* [Installation](install.md)
+* Modules: [jobstats](jobstats.md), [accountstats](accountstats.md), [cloudstats](cloudstats.md), [top](top.md), [quotas](quotas.md), [nodes](nodes.md), [usersummary](usersummary.md)
diff --git a/docs/install.md b/docs/install.md
index 75bdbe9..4363a9c 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -1,3 +1,12 @@
+# Installation
+
+Before installing in production, [a test environment should be set up to test the portal](development.md). This makes it easier to fully configure each module and to modify some functions as needed, such as how the allocations are retrieved. Installing Prometheus and some exporters is also recommended so the portal can be tested with real data.
+
+The portal can be installed directly on a Rocky Linux 8 server with Apache, or with Nginx and Gunicorn. The portal can also be deployed as a container with Podman or Kubernetes; scripts used to deploy both the Nginx and Django containers inside the same pod are provided in the `podman` directory.
+The usual recommendations for a Django production deployment apply.
+
+[Deploying Django](https://docs.djangoproject.com/en/3.2/howto/deployment/)
+
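+As a minimal sketch of the Nginx and Gunicorn option (the WSGI module path, bind address, and worker count are assumptions to adapt to your deployment), the Django application can be started with:
+
+```
+gunicorn userportal.wsgi:application --bind 127.0.0.1:8000 --workers 4
+```
+
+Nginx then proxies requests to that address and serves the static files collected with `python manage.py collectstatic`.
+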
# Production without containers on Rocky Linux 8
RPMs required for production
diff --git a/docs/jobstats.md b/docs/jobstats.md
index 819de85..b6d0b36 100644
--- a/docs/jobstats.md
+++ b/docs/jobstats.md
@@ -1,8 +1,11 @@
# Jobstats
Each user can see their current uses on the cluster and a few hours in the past. The stats for each job are also available. Information about CPU, GPU, memory, filesystem, InfiniBand, power, etc. is also available per job. The submitted job script can also be collected from the Slurm server and then stored and displayed in the portal. Some automatic recommendations are also given to the user, based on the content of their job script and the stats of their job.
-
-
+## Screenshots
+### User stats
+![Stats per user](user.png)
+### Job stats
+![Stats per job](job.png)
## Requirements
* Access to the database of Slurm
diff --git a/docs/nodes.md b/docs/nodes.md
index 9404515..765403d 100644
--- a/docs/nodes.md
+++ b/docs/nodes.md
@@ -1,8 +1,11 @@
# Nodes
This main page present the list of nodes in the cluster with a small graph representing the cores, memory and localdisk used. Each node has a link to a detailed page with more information about the node similar to the jobstats page.
-
-
+## Screenshots
+### Nodes list
+![Nodes in the cluster with a small trend graph for each](nodes_list.png)
+### Node details
+![Detailed stats for a node](nodes_details.png)
## Requirements
* Access to the database of Slurm
diff --git a/docs/quotas.md b/docs/quotas.md
index 6c8d0a9..a772160 100644
--- a/docs/quotas.md
+++ b/docs/quotas.md
@@ -1,11 +1,12 @@
# Quotas
Each user can see their current storage allocations and who within their group is using the group quota.
-
+## Screenshots
+### Quotas
+![Quotas](quota.png)
-Info about the HSM state (Tape) is also available.
-
-
+### HSM
+![HSM](hsm.png)
## Requirements
* Read-only access to the databases of Robinhood
diff --git a/docs/top.md b/docs/top.md
index 0fe6a15..335982a 100644
--- a/docs/top.md
+++ b/docs/top.md
@@ -5,10 +5,18 @@ These pages are only available to staff and are meant to visualize poor cluster
* Jobs on large memory nodes (ranked by worst to best)
* Top users on Lustre
-
-
-
-
+## Screenshots
+### Top compute user (CPU)
+![Top compute user (CPU)](top_compute.png)
+
+### Top compute user (GPU)
+![Top compute user (GPU)](top_compute_gpu.png)
+
+### Jobs on large memory nodes
+![Jobs on large memory nodes](top_largemem.png)
+
+### Top users on Lustre
+![Top users on Lustre](top_lustre.png)
## Requirements
* Access to the database of Slurm
diff --git a/docs/usersummary.md b/docs/usersummary.md
index a318357..97aafbf 100644
--- a/docs/usersummary.md
+++ b/docs/usersummary.md
@@ -1,7 +1,9 @@
-# Usersummary
+# User Summary
The usersummary page can be used for a quick diagnostic of a user to see their current quotas and last jobs.
-
+## Screenshots
+### Quotas and jobs of a user
+![Quotas and jobs of a user](usersummary.png)
## Requirements
* Access to the database of Slurm
diff --git a/mkdocs.yml b/mkdocs.yml
new file mode 100644
index 0000000..ca403df
--- /dev/null
+++ b/mkdocs.yml
@@ -0,0 +1,37 @@
+site_name: TrailblazingTurtle
+nav:
+  - 'Home': index.md
+  - 'Data collection': data.md
+  - 'Development': development.md
+  - 'Installation': install.md
+  - 'Modules':
+      - 'Job Stats': jobstats.md
+      - 'Top': top.md
+      - 'User Summary': usersummary.md
+      - 'Account Stats': accountstats.md
+      - 'Cloud Stats': cloudstats.md
+      - 'Nodes': nodes.md
+      - 'Quotas': quotas.md
+      - 'Quotas GPFS': quotasgpfs.md
+      - 'CF Access': cfaccess.md
+
+theme:
+  name: material
+  # logo: img/logo.png
+  features:
+    # enable button to copy code blocks
+    - content.code.copy
+plugins:
+  - search
+markdown_extensions:
+  # allow for arbitrary nesting of code and content blocks
+  - pymdownx.superfences:
+  # syntax highlighting in code blocks and inline code
+  - pymdownx.highlight
+  # support for (collapsible) admonitions (notes, tips, etc.)
+  - admonition
+  - pymdownx.details
+  # icon + emoji
+  # - pymdownx.emoji:
+  #     emoji_index: !!python/name:material.extensions.emoji.twemoji
+  #     emoji_generator: !!python/name:material.extensions.emoji.to_svg
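+
+# To build or preview the documentation locally with the same tooling as the CI workflow:
+#   pip install mkdocs mkdocs-material
+#   mkdocs serve          # live-reloading preview at http://127.0.0.1:8000
+#   mkdocs build --strict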