Using mkdocs to generate documentation (#55)
guilbaults authored May 13, 2024
1 parent 68dffae commit fca27b8
Showing 16 changed files with 197 additions and 70 deletions.
30 changes: 30 additions & 0 deletions .github/workflows/mkdocs_deploy.yml
@@ -0,0 +1,30 @@
name: deploy documentation (only on push to main branch)
on:
push:
branches: main
# Declare default permissions as read only.
permissions: read-all
jobs:
build:
runs-on: ubuntu-22.04
permissions:
# Need to be able to write to the deploy branch
contents: write
steps:
- name: checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
with:
fetch-depth: 0 # need to fetch all history to ensure correct Git revision dates in docs

- name: set up Python
uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # v5.0.0
with:
python-version: '3.10'

- name: install mkdocs + plugins
run: |
pip install mkdocs mkdocs-material
pip list | grep mkdocs
mkdocs --version
- name: build
run: mkdocs build --strict && mkdocs gh-deploy --force
30 changes: 30 additions & 0 deletions .github/workflows/mkdocs_test.yml
@@ -0,0 +1,30 @@
name: build documentation
on: [push, pull_request]
# Declare default permissions as read only.
permissions: read-all
jobs:
build:
runs-on: ubuntu-22.04
steps:
- name: checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1

- name: set up Python
uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # v5.0.0
with:
python-version: '3.10'

# - name: Markdown Linting Action
# uses: avto-dev/[email protected]
# with:
# rules: '/lint/rules/changelog.js'
# config: '/lint/config/changelog.yml'
# args: '.'

- name: install mkdocs + plugins
run: |
pip install mkdocs mkdocs-material
pip list | grep mkdocs
mkdocs --version
- name: build
run: mkdocs build --strict
1 change: 1 addition & 0 deletions .gitignore
@@ -146,3 +146,4 @@ public.cert
idp_metadata.xml

.DS_Store
.site/
52 changes: 4 additions & 48 deletions README.md
@@ -3,20 +3,15 @@

[![DOI](https://zenodo.org/badge/549763009.svg)](https://zenodo.org/badge/latestdoi/549763009)

This web portal is intended to give HPC users a view of the overall use of the HPC cluster and their use. This portal is using the information collected on compute nodes and management servers to produce the information in the various modules:
This web portal is intended to give HPC users a view of the overall use of the HPC cluster and of their own use. This portal uses the information collected on compute nodes and management servers to produce the information in the various modules:

* [jobstats](docs/jobstats.md)
* [accountstats](docs/accountstats.md)
* [cloudstats](docs/cloudstats.md)
* [quotas](docs/quotas.md)
* [top](docs/top.md)
* [usersummary](docs/usersummary.md)
[Documentation](docs/index.md)

Some examples of the available graphs are displayed in the documentation of each module.

This portal is made to be modular; some modules can be disabled if the data they require is not needed or collected. Some modules have optional dependencies and will hide some graphs if the data is not available.

This portal also supports Openstack, the users can see their use without having to install a monitoring agent in their VM in their OpenStack VMs.
This portal also supports OpenStack; users can see their use without having to install a monitoring agent in their OpenStack VMs.

Staff members can also see the use of any user to help them optimize their use of HPC and OpenStack clusters.

@@ -26,7 +21,7 @@ Some information collected is also available for the general public like the num
## Design
Performance metrics are stored in Prometheus; multiple exporters are used to gather this data, and most are optional.

The Django portal will also access various MySQL databases like the database of Slurm and Robinhood (if installed) to gather some information. Timeseries are stored with Prometheus for better performance. Compatible alternatives to Prometheus like Thanos, VictoriaMetrics, and Grafana Mimir should work without any problems (Thanos is used in production). Recorder rules in Prometheus are used to pre-aggregate some stats for the portal.
The Django portal will also access various MySQL databases like the database of Slurm and Robinhood (if installed) to gather some information. Time series are stored with Prometheus for better performance. Compatible alternatives to Prometheus like Thanos, VictoriaMetrics, and Grafana Mimir should work without any problems (Thanos is used in production). Recording rules in Prometheus are used to pre-aggregate some stats for the portal.

![Architecture diagram](docs/userportal.png)

@@ -35,42 +30,3 @@ Various data sources are used to populate the content of this portal. Most of th
Some pre-aggregation is done using recording rules in Prometheus. The required recording rules are documented in the data sources documentation.

[Data sources documentation](docs/data.md)

## Test environment
A test environment using the local `uid` resolver and dummy allocations is provided to test the portal.

To use it, copy `example/local.py` to `userportal/local.py`. The other functions are documented in `common.py` if any other overrides are needed for your environment.

To quickly test and bypass authentication, add this line to `userportal/settings/99-local.py`. Other local configuration can be added in this file to override the default settings.

```
AUTHENTICATION_BACKENDS.insert(0, 'userportal.authentication.staffRemoteUserBackend')
```

This bypasses authentication and will use the `REMOTE_USER` header or environment variable to authenticate the user. This is useful for trying the portal without having to set up a full IDP environment. The REMOTE_USER method can be used with some IDPs such as Shibboleth. A SAML2-based IDP is now the preferred authentication method for production.

Examine the default configuration in `userportal/settings/` and override any settings in `99-local.py` as needed.

Then you can launch the example server with:

```
[email protected] [email protected] python manage.py runserver
```

This will run the portal with the user `someuser` logged in as a staff member.

Automated Django tests are also available; they can be run with:

```
python manage.py test
```

This will test the various modules, including reading job data from the Slurm database and Prometheus. A temporary database for Django is created automatically for the tests. Slurm and Prometheus data are read directly from production data with a read-only account. A representative user, job, and account need to be defined for use in the tests; check the `90-tests.py` file for an example.

## Production install
The portal can be installed directly on a Centos7 or Rocky8 Apache web server or with Nginx and Gunicorn. The portal can also be deployed as a container with Podman or Kubernetes. Some scripts used to deploy both Nginx and Django containers inside the same pod are provided in the `podman` directory.
The various recommendations for any normal Django production deployment can be followed.

[Deploying Django](https://docs.djangoproject.com/en/3.2/howto/deployment/)

[Install documentation](docs/install.md)
4 changes: 3 additions & 1 deletion docs/accountstats.md
@@ -1,7 +1,9 @@
# Accountstats
Users can also see the aggregated use of the users in the same group. This also shows the current priority of the account in Slurm and a few months of history of computing resource use.

<a href="accountstats.png"><img src="accountstats.png" alt="Stats per account" width="100"/></a>
## Screenshots
### Account stats
![Stats per account](accountstats.png)

## Requirements

10 changes: 7 additions & 3 deletions docs/cloudstats.md
@@ -1,9 +1,13 @@
# Cloudstats
The stats of the VMs running on OpenStack can be viewed. This uses the stats from libvirtd; no agent needs to be installed in the VMs. There is an overall stats page available for staff. Per-project and per-VM pages are also available to the users.

<a href="cloudstats.png"><img src="cloudstats.png" alt="Overall use" width="100"/></a>
<a href="cloudstats_rpoject.png"><img src="cloudstats_project.png" alt="Use within a project" width="100"/></a>
<a href="cloudstats_vm.png"><img src="cloudstats_vm.png" alt="Use within a VM" width="100"/></a>
## Screenshots
### Overall use
![Overall use](cloudstats.png)
### Use within a project
![Use within a project](cloudstats_project.png)
### Use within a VM
![Use within a VM](cloudstats_vm.png)

## Requirements

8 changes: 4 additions & 4 deletions docs/data.md
@@ -1,5 +1,5 @@
# Data sources
Some features will not be available if the exporter required to gather the stats is not configured.
The main requirement to monitor a Slurm cluster is to install slurm-job-exporter and open read-only access to the Slurm MySQL database. The other data sources on this page can be installed to gather more data.
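
As a sketch, the Django side of that read-only access could look like the following database entry in the settings (the alias, credentials, and host below are assumptions for illustration, not the portal's actual configuration):

```
# Hypothetical read-only Slurm accounting database entry for the Django
# settings; the alias, credentials, and host are placeholder assumptions.
DATABASES['slurm'] = {
    'ENGINE': 'django.db.backends.mysql',  # standard Django MySQL backend
    'NAME': 'slurm_acct_db',               # Slurm accounting database
    'USER': 'userportal_ro',               # account with SELECT-only access
    'PASSWORD': 'changeme',
    'HOST': 'slurmdb.example.org',
    'PORT': '3306',
}
```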

## slurm-job-exporter
[slurm-job-exporter](https://github.com/guilbaults/slurm-job-exporter) is used to capture information from cgroups managed by Slurm on each compute node. This gathers CPU, memory, and GPU utilization.
@@ -47,12 +47,12 @@ groups:
expr: sum(label_replace(deriv(slurm_job_process_usage_total{}[1m]) > 0, "bin", "$1", "exe", ".*/(.*)")) by (cluster, account, bin)
```

## slurm-exporter
[slurm-exporter](https://github.com/guilbaults/prometheus-slurm-exporter/tree/osc) is used to capture stats from Slurm like the priority of each user. This portal uses a fork, branch `osc` of the linked repository. This fork supports GPU reporting and sshare stats.

## Access to the database of slurmacct
This MySQL database is accessed by a read-only user. It does not need to be on the same database server where Django stores its data.

## slurm-exporter
[slurm-exporter](https://github.com/guilbaults/prometheus-slurm-exporter/tree/osc) is used to capture stats from Slurm like the priority of each user. This portal uses a fork, branch `osc` of the linked repository. This fork supports GPU reporting and sshare stats.

## lustre\_exporter and lustre\_exporter\_slurm
These two exporters are used to gather information about Lustre usage.

29 changes: 29 additions & 0 deletions docs/development.md
@@ -0,0 +1,29 @@
A test and development environment using the local `uid` resolver and dummy allocations is provided to test the portal.

To use it, copy `example/local.py` to `userportal/local.py`. The other functions are documented in `common.py` if any other overrides are needed for your environment.

To quickly test and bypass authentication, add this line to `userportal/settings/99-local.py`. Other local configuration can be added in this file to override the default settings.

```
AUTHENTICATION_BACKENDS.insert(0, 'userportal.authentication.staffRemoteUserBackend')
```

This bypasses authentication and will use the `REMOTE_USER` header or environment variable to authenticate the user. This is useful for trying the portal without having to set up a full IDP environment. The REMOTE_USER method can be used with some IDPs such as Shibboleth. A SAML2-based IDP is now the preferred authentication method for production.

Examine the default configuration in `userportal/settings/` and override any settings in `99-local.py` as needed.
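
For example, a minimal `99-local.py` could look like the following sketch; `DEBUG` and `ALLOWED_HOSTS` are standard Django settings, and any portal-specific names should be copied from the defaults in `userportal/settings/`:

```
# Hypothetical sketch of userportal/settings/99-local.py for development;
# DEBUG and ALLOWED_HOSTS are standard Django settings.
DEBUG = True                                # verbose error pages in development
ALLOWED_HOSTS = ['localhost', '127.0.0.1']  # hosts allowed to serve the portal
```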

Then you can launch the example server with:

```
[email protected] [email protected] python manage.py runserver
```

This will run the portal with the user `someuser` logged in as a staff member.

Automated Django tests are also available; they can be run with:

```
python manage.py test
```

This will test the various modules, including reading job data from the Slurm database and Prometheus. A temporary database for Django is created automatically for the tests. Slurm and Prometheus data are read directly from production data with a read-only account. A representative user, job, and account need to be defined for use in the tests; check the `90-tests.py` file for an example.
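
A hypothetical sketch of such a `90-tests.py` is shown below; the variable names are assumptions for illustration, so use the `90-tests.py` example shipped with the portal for the names the tests actually read:

```
# Hypothetical sketch of userportal/settings/90-tests.py; the variable
# names are assumptions -- use the shipped 90-tests.py example as reference.
TESTS_USER = 'someuser'         # a real user present in production data
TESTS_JOB_ID = 1234567          # a representative completed job of that user
TESTS_ACCOUNT = 'def-someuser'  # a Slurm account used in the tests
```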
11 changes: 11 additions & 0 deletions docs/index.md
@@ -0,0 +1,11 @@
# TrailblazingTurtle

## Introduction
TrailblazingTurtle is a web portal for HPC clusters. It is designed to be a single point of entry for users to access information about the cluster, their jobs, and the performance of the cluster. It is modular, so it can be easily extended to support new features.

## Design
The Django portal will access various MySQL databases like the database of Slurm and Robinhood (if installed) to gather some information.

Time series are stored with Prometheus for better performance. Compatible alternatives to Prometheus like Thanos, VictoriaMetrics, and Grafana Mimir should work without any problems (Thanos is used in production). Recording rules in Prometheus are used to pre-aggregate some stats for the portal.

![Architecture diagram](userportal.png)
9 changes: 9 additions & 0 deletions docs/install.md
@@ -1,3 +1,12 @@
# Installation

Before installing in production, [a test environment should be set up to test the portal](development.md). This makes it easier to fully configure each module and to modify functions as needed, such as how the allocations are retrieved. Installing Prometheus and some exporters is also recommended to test the portal with real data.

The portal can be installed directly on a Rocky8 Apache web server or with Nginx and Gunicorn. It can also be deployed as a container with Podman or Kubernetes. Some scripts used to deploy both Nginx and Django containers inside the same pod are provided in the `podman` directory.
The usual recommendations for any Django production deployment can be followed.

[Deploying Django](https://docs.djangoproject.com/en/3.2/howto/deployment/)

## Production without containers on Rocky Linux 8

RPMs required for production
7 changes: 5 additions & 2 deletions docs/jobstats.md
Original file line number Diff line number Diff line change
@@ -1,8 +1,11 @@
# Jobstats
Each user can see their current use on the cluster and a few hours into the past. The stats for each job are also available. Information about CPU, GPU, memory, filesystem, InfiniBand, power, etc. is also available per job. The submitted job script can also be collected from the Slurm server, then stored and displayed in the portal. Some automatic recommendations are also given to the user, based on the content of their job script and the stats of their job.

<a href="user.png"><img src="user.png" alt="Stats per user" width="100"/></a>
<a href="job.png"><img src="job.png" alt="Stats per job" width="100"/></a>
## Screenshots
### User stats
![Stats per user](user.png)
### Job stats
![Stats per job](job.png)

## Requirements
* Access to the database of Slurm
7 changes: 5 additions & 2 deletions docs/nodes.md
@@ -1,8 +1,11 @@
# Nodes
This main page presents the list of nodes in the cluster with a small graph representing the cores, memory, and local disk used. Each node has a link to a detailed page with more information about the node, similar to the jobstats page.

<a href="nodes_list.png"><img src="nodes_list.png" alt="Nodes in the cluster with a small graph for each" width="100"/></a>
<a href="nodes_details.png"><img src="nodes_details.png" alt="Detailed stats for a node" width="100"/></a>
## Screenshots
### Nodes list
![Nodes in the cluster with a small trend graph for each](nodes_list.png)
### Node details
![Detailed stats for a node](nodes_details.png)

## Requirements
* Access to the database of Slurm
9 changes: 5 additions & 4 deletions docs/quotas.md
Original file line number Diff line number Diff line change
@@ -1,11 +1,12 @@
# Quotas
Each user can see their current storage allocations and who within their group is using the group quota.

<a href="quota.png"><img src="quota.png" alt="Quotas" width="100"/></a>
## Screenshots
### Quotas
![Quotas](quota.png)

Info about the HSM state (Tape) is also available.

<a href="hsm.png"><img src="hsm.png" alt="HSM" width="100"/></a>
### HSM
![HSM](hsm.png)

## Requirements
* Read-only access to the databases of Robinhood
16 changes: 12 additions & 4 deletions docs/top.md
@@ -5,10 +5,18 @@ These pages are only available to staff and are meant to visualize poor cluster
* Jobs on large memory nodes (ranked by worst to best)
* Top users on Lustre

<a href="top_compute.png"><img src="top_compute.png" alt="Top compute user (CPU)" width="100"/></a>
<a href="top_compute_gpu.png"><img src="top_compute_gpu.png" alt="Top compute user(GPU)" width="100"/></a>
<a href="top_largemem.png"><img src="top_largemem.png" alt="Jobs on large memory nodes" width="100"/></a>
<a href="top_lustre.png"><img src="top_lustre.png" alt="Top users on Lustre" width="100"/></a>
## Screenshots
### Top compute user (CPU)
![Top compute user (CPU)](top_compute.png)

### Top compute user (GPU)
![Top compute user (GPU)](top_compute_gpu.png)

### Jobs on large memory nodes
![Jobs on large memory nodes](top_largemem.png)

### Top users on Lustre
![Top users on Lustre](top_lustre.png)

## Requirements
* Access to the database of Slurm
6 changes: 4 additions & 2 deletions docs/usersummary.md
Original file line number Diff line number Diff line change
@@ -1,7 +1,9 @@
# Usersummary
# User Summary
The usersummary page can be used for a quick diagnostic of a user, showing their current quotas and latest jobs.

<a href="usersummary.png"><img src="usersummary.png" alt="Quotas and jobs of a user" width="100"/></a>
## Screenshots
### Quotas and jobs of a user
![Quotas and jobs of a user](usersummary.png)

## Requirements
* Access to the database of Slurm
38 changes: 38 additions & 0 deletions mkdocs.yml
@@ -0,0 +1,38 @@
site_name: TrailblazingTurtle
repo_url: https://github.com/guilbaults/TrailblazingTurtle/
nav:
- 'Home': index.md
- 'Data collection': data.md
- 'Development': development.md
- 'Installation': install.md
- 'Modules':
- 'Job Stats': jobstats.md
- 'Top': top.md
- 'User Summary': usersummary.md
- 'Account Stats': accountstats.md
- 'Cloud Stats': cloudstats.md
- 'Nodes': nodes.md
- 'Quotas': quotas.md
- 'Quotas GPFS': quotasgpfs.md
- 'CF Access': cfaccess.md

theme:
name: material
# logo: img/logo.png
features:
# enable button to copy code blocks
- content.code.copy
plugins:
- search
markdown_extensions:
# allow for arbitrary nesting of code and content blocks
- pymdownx.superfences:
# syntax highlighting in code blocks and inline code
- pymdownx.highlight
# support for (collapsible) admonitions (notes, tips, etc.)
- admonition
- pymdownx.details
# icon + emoji
# - pymdownx.emoji:
# emoji_index: !!python/name:material.extensions.emoji.twemoji
# emoji_generator: !!python/name:material.extensions.emoji.to_svg
