Production hosting is managed by the Shields ops team:
Component | Subcomponent | People with access
---|---|---
shields-production-us | Account owner | @paulmelnikow |
shields-production-us | Full access | @calebcartwright, @chris48s, @paulmelnikow, @pyvesb |
shields-production-us | Access management | @calebcartwright, @chris48s, @paulmelnikow, @pyvesb |
Compose.io Redis | Account owner | @paulmelnikow |
Compose.io Redis | Account access | @paulmelnikow |
Compose.io Redis | Database connection credentials | @calebcartwright, @chris48s, @paulmelnikow, @pyvesb |
Zeit Now | Team owner | @paulmelnikow |
Zeit Now | Team members | @paulmelnikow, @chris48s, @calebcartwright, @platan |
Raster server | Full access as team members | @paulmelnikow, @chris48s, @calebcartwright, @platan |
shields-server.com redirector | Full access as team members | @paulmelnikow, @chris48s, @calebcartwright, @platan |
Legacy badge servers | Account owner | @espadrine |
Legacy badge servers | ssh, logs | @espadrine |
Legacy badge servers | Deployment | @espadrine, @paulmelnikow |
Legacy badge servers | Admin endpoints | @espadrine, @paulmelnikow |
Cloudflare (CDN) | Account owner | @espadrine |
Cloudflare (CDN) | Access management | @espadrine |
Cloudflare (CDN) | Admin access | @calebcartwright, @chris48s, @espadrine, @paulmelnikow, @PyvesB |
Twitch | OAuth app | @PyvesB |
Discord | OAuth app | @PyvesB |
YouTube | Account owner | @PyvesB |
OpenStreetMap (for Wheelmap) | Account owner | @paulmelnikow |
DNS | Account owner | @olivierlacan |
DNS | Read-only account access | @espadrine, @paulmelnikow, @chris48s |
Sentry | Error reports | @espadrine, @paulmelnikow |
Frontend | Deployment | Technically anyone with push access but in practice must be deployed with the badge server |
Metrics server | Owner | @platan |
UptimeRobot | Account owner | @paulmelnikow |
More metrics | Owner | @RedSparr0w |
Netlify (documentation site) | Owner | @chris48s |
There are too many bottlenecks!
Shields has mercifully little persistent state:
- The GitHub tokens we collect are saved on each server in a cloud Redis database. They can also be fetched from the GitHub auth admin endpoint for debugging.
- The server keeps a few caches in memory. These are neither persisted nor inspectable:
  - The request cache
  - The regular-update cache
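For illustration only (the real caches live in the badge server's source), an in-memory TTL cache along these lines shows why such state is neither persisted nor inspectable from outside the process:

```javascript
// Hypothetical in-memory TTL cache -- not the server's actual
// implementation. Entries live only in process memory, so they
// vanish on restart and cannot be inspected externally.
class TtlCache {
  constructor(ttlMillis) {
    this.ttlMillis = ttlMillis
    this.entries = new Map()
  }
  set(key, value) {
    this.entries.set(key, { value, expires: Date.now() + this.ttlMillis })
  }
  get(key) {
    const entry = this.entries.get(key)
    if (!entry) return undefined
    if (Date.now() > entry.expires) {
      this.entries.delete(key)
      return undefined
    }
    return entry.value
  }
}

const cache = new TtlCache(60000)
cache.set('vendor/service', { message: 'passing' })
console.log(cache.get('vendor/service').message) // passing
```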
To bootstrap the configuration process, the script that starts the server sets a single environment variable:

```
NODE_CONFIG_ENV=shields-io-production
```
With that variable set, the server (using `config`) reads these files:

- `local-shields-io-production.yml`. This file contains secrets which are checked in with a deploy commit.
- `shields-io-production.yml`. This file contains non-secrets which are checked in to the main repo.
- `default.yml`. This file contains defaults.
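The layering can be sketched with plain objects standing in for the parsed YAML files (key names here are hypothetical; the real keys live in the repo's config directory, and node-config performs a deep merge where this sketch uses a shallow one):

```javascript
// Sketch of how the `config` package layers the three files.
// Later layers override earlier ones.
const defaults = { requestTimeoutSeconds: 10, sentryDsn: undefined } // default.yml
const envConfig = { requestTimeoutSeconds: 25 }                      // shields-io-production.yml
const localEnvConfig = { sentryDsn: 'https://example@sentry.io/1' }  // local-shields-io-production.yml (secrets)

// With NODE_CONFIG_ENV=shields-io-production, node-config merges:
// default.yml -> shields-io-production.yml -> local-shields-io-production.yml
const merged = { ...defaults, ...envConfig, ...localEnvConfig }
console.log(merged.requestTimeoutSeconds) // 25 (overridden by the env file)
console.log(merged.sentryDsn)             // secret from the local file
```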
The project ships with `dotenv`; however, there is no `.env` in production.
Sitting in front of the three servers is a Cloudflare Free account which provides several services:
- Global CDN, caching, and SSL gateway for `img.shields.io`
- Analytics through the Cloudflare dashboard
- DNS hosting for `shields.io`
Cloudflare is configured to respect the servers' cache headers.
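As a sketch of what "respecting cache headers" means in practice (a hypothetical handler; the real header logic lives in the badge server), Cloudflare caches a response only as long as the `Cache-Control` header sent by the origin allows:

```javascript
// Hypothetical sketch: a badge response cacheable for maxAgeSeconds
// carries a Cache-Control header which the CDN honors verbatim.
function cacheHeaders(maxAgeSeconds) {
  if (maxAgeSeconds === 0) {
    // Dynamic badges: tell the CDN and browsers not to cache at all.
    return { 'Cache-Control': 'no-cache, no-store, must-revalidate' }
  }
  return { 'Cache-Control': `max-age=${maxAgeSeconds}` }
}

console.log(cacheHeaders(300)['Cache-Control']) // max-age=300
console.log(cacheHeaders(0)['Cache-Control'])   // no-cache, no-store, must-revalidate
```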
The frontend is served by GitHub Pages via the `gh-pages` branch. SSL is enforced. `shields.io` resolves to the GitHub Pages hosts; it is not proxied through Cloudflare.
Technically any maintainer can push to `gh-pages`, but in practice the frontend must be deployed with the badge server via the deployment process described below.
The raster server `raster.shields.io` (a.k.a. the rasterizing proxy) is hosted on Zeit Now. It's managed in the svg-to-image-proxy repo.
The deployment is done in two stages: the badge server (Heroku) and the frontend (gh-pages).

After merging a commit to master, Heroku should create a staging deploy. Check this has deployed correctly in the `shields-staging` pipeline and review http://shields-staging.herokuapp.com/

If we're happy with it, "promote to production". This will deploy what's on staging to the `shields-production-eu` and `shields-production-us` pipelines.
To deploy the frontend to GitHub Pages, use a clean clone of the shields repo:

```sh
$ git pull             # update the working copy
$ npm ci               # install dependencies (devDependencies are needed to build the frontend)
$ make deploy-gh-pages # build the frontend and push it to the gh-pages branch
```
No secrets are required to build or deploy the frontend.
DNS is registered with DNSimple.
Logs can be retrieved from Heroku.
Error reporting is one of the most useful tools we have for monitoring the server. It's generously donated by Sentry. We bundle `raven` into the application, and the Sentry DSN is configured via `local-shields-io-production.yml` (see documentation).
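For illustration, the DSN entry in `local-shields-io-production.yml` might look like the fragment below. The key name and nesting are assumptions; check the server's config schema for the real ones, and note the placeholder values are not a real DSN.

```yaml
# local-shields-io-production.yml (secret; checked in with a deploy commit)
private:
  sentry_dsn: 'https://<public-key>@sentry.io/<project-id>'
```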
Overall server performance and requests by service are monitored using Prometheus and Grafana.
Request performance is monitored in several places:
- Status (using UptimeRobot)
- Server metrics using Prometheus and Grafana
- @RedSparr0w's monitor which posts notifications to a private #monitor chat room
There are three legacy servers on OVH VPSes which are currently used for proxying.

Cname | Hostname | Type | IP | Location
---|---|---|---|---
s0.servers.shields.io | vps71670.vps.ovh.ca | VPS | 192.99.59.72 | Quebec, Canada |
s1.servers.shields.io | vps244529.ovh.net | VPS | 51.254.114.150 | Gravelines, France |
s2.servers.shields.io | vps117870.vps.ovh.ca | VPS | 149.56.96.133 | Quebec, Canada |
The only way to inspect the commit on the server is with `git ls-remote`.
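For example, `git ls-remote` prints the commit each ref resolves to without needing ssh access. It works against any remote URL; the demonstration below uses a throwaway local repository rather than the production remote:

```shell
# Set up a throwaway repo with one commit to demonstrate.
tmp=$(mktemp -d)
git init -q "$tmp"
git -C "$tmp" -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "deploy"

# Prints "<40-char sha>  HEAD" -- the commit currently at the tip.
git ls-remote "$tmp" HEAD
```

Against production, the same command would be pointed at the deployed repository's remote URL and the relevant ref.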