Deployment
- Deployment stack is EC2/RDS/S3/ElastiCache (Redis)
- Deployment config/scripts live in https://github.com/DemocracyClub/polling_deploy
- Static files are served from S3
- WDIV may be deployed across multiple EC2 front-ends
- ElastiCache is used as shared storage for tracking API key usage
- Logs are at https://papertrailapp.com/groups/5099901/events
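Because several front-ends share one ElastiCache instance, usage counts stay consistent whichever server handles a request. The sketch below shows the general pattern; the key naming scheme and the dict-backed stand-in client are illustrative assumptions, not WDIV's actual implementation (in production the client would be a `redis.Redis` connected to the ElastiCache endpoint).

```python
# Sketch: per-key, per-day API usage counting against a shared
# Redis-style store. Key names here are assumptions for illustration.

class UsageCounter:
    """Count API calls per key per day in a shared store, so counts
    agree across multiple front-ends."""

    def __init__(self, client):
        # client: anything exposing incr(name) -> int, e.g. redis.Redis
        self.client = client

    def record_hit(self, api_key, day):
        # e.g. "usage:abc123:2024-05-01" -- one counter per key per day
        name = f"usage:{api_key}:{day}"
        return self.client.incr(name)


class FakeRedis:
    """Minimal in-memory stand-in for redis.Redis, for this example only."""

    def __init__(self):
        self.store = {}

    def incr(self, name):
        self.store[name] = self.store.get(name, 0) + 1
        return self.store[name]


counter = UsageCounter(FakeRedis())
counter.record_hit("abc123", "2024-05-01")
count = counter.record_hit("abc123", "2024-05-01")
print(count)  # 2
```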
WhereDoIVote has a number of associated S3 buckets:
- s3://ons-cache - Copies of data we need from the ONS. We fetch Local Authority boundaries and the ONSPD from here when we build a WDIV image.
- s3://pollingstations-assets2 - Production static assets
- s3://pollingstations-uploads - Files uploaded by users are saved here
- s3://pollingstations-data - Known good files are synced here - this is where we pick up data to import
- s3://pollingstations-packer-assets/ - Private bucket for things we need in the Packer build that can't be public. This is where we pick up AddressBase from when we build a WDIV image.
- s3://pollingstations-uploads-dev - Files uploaded by users are saved here in dev/staging
- s3://pollingstations-data-dev - Known good files are synced here in dev/staging
For performance reasons, we use sharding in production on WDIV. Each front-end/NGINX server also hosts a Postgres instance with its own copy of the address/polling station/district data. This data is essentially read-only. Separately, there is a shared RDS instance for read/write transactions. This DB connection is called "logger". DB routing logic is in https://github.com/DemocracyClub/UK-Polling-Stations/blob/master/polling_stations/db_routers.py
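The read/read-write split above is the kind of thing a Django database router expresses. This is only a sketch of the shape: the app labels in `READ_WRITE_APPS` and the fake models are assumptions for illustration; the real routing logic is in the `db_routers.py` file linked above.

```python
# Sketch of a Django-style DB router: read/write models go to the
# shared RDS "logger" connection, read-only lookup data stays on the
# local per-front-end Postgres ("default"). App labels are assumed.

READ_WRITE_APPS = {"api_keys", "feedback"}  # hypothetical app labels

class LoggerRouter:
    def db_for_read(self, model, **hints):
        if model._meta.app_label in READ_WRITE_APPS:
            return "logger"
        return "default"

    def db_for_write(self, model, **hints):
        if model._meta.app_label in READ_WRITE_APPS:
            return "logger"
        return None  # refuse writes to the read-only local copy


# Tiny stand-ins so the router can be exercised without Django:
class _Meta:
    def __init__(self, app_label):
        self.app_label = app_label

class FakeModel:
    _meta = _Meta("api_keys")

class ReadOnlyModel:
    _meta = _Meta("pollingstations")  # hypothetical read-only app

router = LoggerRouter()
print(router.db_for_write(FakeModel))     # logger
print(router.db_for_read(ReadOnlyModel))  # default
```

In Django, such a router would be activated via the `DATABASE_ROUTERS` setting so every ORM query is dispatched without per-query boilerplate.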
In order to ensure good performance on server init, we pre-warm critial DB tables on deploy. There is more explanation of this in https://github.com/DemocracyClub/polling_deploy/blob/master/files/init_db.sh The tradeoff of this is that it takes a new instance ~15 mins to become healthy which makes WDIV a bit cumbersome to deploy.
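One common way to pre-warm Postgres tables is the `pg_prewarm` extension, which loads a relation into the buffer cache ahead of first use. The sketch below only generates the SQL; the table names are assumptions, and the actual mechanism and table list live in the `init_db.sh` script linked above.

```python
# Sketch: build pg_prewarm statements for tables we want hot before
# the instance is marked healthy. Table names are illustrative only.

HOT_TABLES = [
    "pollingstations_pollingstation",   # assumed name
    "pollingstations_pollingdistrict",  # assumed name
]

def prewarm_statements(tables):
    """Return SQL that loads each table into shared buffers,
    to be run once against the local Postgres during deploy."""
    return [f"SELECT pg_prewarm('{t}');" for t in tables]

for stmt in prewarm_statements(HOT_TABLES):
    print(stmt)
```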