# Database investigation tools
That is a bummer, but it'll be ok! You'll get through this. We've set up a few things to help you get some data about your queries to act on.
Please read the entirety of this doc and any comments in the linked file.
After reading, head on over to this file in your IDE to make the change needed for an investigation. Turn on the settings by following the instructions in each configuration block.
## log_min_duration_statement

[PostgreSQL log_min_duration_statement docs](https://www.postgresql.org/docs/current/runtime-config-logging.html)

This is a straightforward logging option for capturing slow queries. It logs any statement that runs longer than the configured threshold, which can be used to determine which queries are taking a long time; that may be enough information to work from on its own.
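As a quick sanity check, you can confirm the current threshold from any SQL session; a value of `-1` means slow-query logging is disabled.

```sql
-- Show the currently configured slow-query threshold (-1 means disabled).
SHOW log_min_duration_statement;
```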
To view these logs, head on over to the Azure portal, visit the PostgreSQL Flexible Server service, select the database whose logs you want to view, and select the Logs blade. You can use the following query to filter the records. Feel free to add a timeframe and additional filters to fit your needs.
```kusto
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DBFORPOSTGRESQL"
| where Category == "PostgreSQLLogs"
| where Message contains "duration:"
```
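For example, a variant scoped to the last day and sorted newest-first might look like this (the extra lines are standard Kusto operators):

```kusto
// Same filter, narrowed to the last 24 hours and sorted newest-first.
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DBFORPOSTGRESQL"
| where Category == "PostgreSQLLogs"
| where Message contains "duration:"
| where TimeGenerated > ago(1d)
| order by TimeGenerated desc
```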
Once you've found a troublesome query, you can run `EXPLAIN ANALYZE` against it to get even more information.
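As a sketch, that might look like the following; the table and filter here are hypothetical stand-ins for whatever query showed up in your logs, and `BUFFERS` adds block I/O detail to the plan output.

```sql
-- Hypothetical example: swap in the table and predicate from the slow
-- query you found in the logs. BUFFERS reports block I/O alongside timings.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM test_event
WHERE created_at > now() - interval '1 day';
```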
## pg_stat_statements

[PostgreSQL pg_stat_statements docs](https://www.postgresql.org/docs/current/pgstatstatements.html)

`pg_stat_statements` provides a means for tracking execution statistics of all SQL statements executed by a server. We can use it to uncover which queries may be hogging memory, CPU, and I/O. We can also use it to find queries that are individually efficient but run so often that they add significant load.

To get everything out of `pg_stat_statements`, you may need to enable other configuration options. An example is `track_io_timing`, which must be enabled to get accurate read/write load data; see the [PostgreSQL track_io_timing docs](https://www.postgresql.org/docs/current/runtime-config-statistics.html).
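As a rough sketch of what the SQL side looks like (on a Flexible Server, `shared_preload_libraries` and server parameters like `track_io_timing` are managed through the Azure portal or CLI rather than from SQL):

```sql
-- Expose the pg_stat_statements view in the current database; the library
-- itself must already be listed in shared_preload_libraries.
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Confirm whether I/O timing is being collected; when it's off,
-- blk_read_time and blk_write_time will stay at zero.
SHOW track_io_timing;
```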
Here are some potentially helpful queries. (If you write or run a query that you've found helpful, please add it to the list!)
```sql
-- Top 10 queries by average block I/O time per call
SELECT userid::regrole, dbid, query
FROM pg_stat_statements
ORDER BY (blk_read_time + blk_write_time) / calls DESC
LIMIT 10;
```

```sql
-- Top 10 queries by mean execution time
SELECT userid::regrole, dbid, mean_exec_time, query
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```

```sql
-- Top 10 queries by most variable execution time
-- (high variance can be a sign of an unstable plan)
SELECT userid::regrole, dbid, stddev_exec_time, query
FROM pg_stat_statements
ORDER BY stddev_exec_time DESC
LIMIT 10;
```

```sql
-- Top 10 queries by shared-buffer activity
SELECT userid::regrole, dbid, query
FROM pg_stat_statements
ORDER BY (shared_blks_hit + shared_blks_dirtied) DESC
LIMIT 10;
```
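If you want to measure a change from a clean slate, the extension also ships a reset function. Note that calling it usually requires elevated privileges, so whether it works depends on the role you connect with.

```sql
-- Clear the accumulated statistics so subsequent numbers reflect only
-- new activity. Typically restricted to superuser-like roles.
SELECT pg_stat_statements_reset();
```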
## auto_explain

[PostgreSQL auto_explain docs](https://www.postgresql.org/docs/current/auto-explain.html)

`auto_explain` automatically logs the execution plans of queries that run longer than a set threshold. The power here is that it lets you see what a query actually did, not just how long it took. If you suspect that a query plan may be unstable, this is a great option to explore.
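As a sketch of the knobs involved, the session-level equivalents look like this; on a Flexible Server these are normally set as server parameters in the portal, and the threshold value here is purely illustrative.

```sql
-- Session-level sketch; in Azure these are typically server parameters.
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '250ms';  -- log plans for queries slower than this
SET auto_explain.log_analyze = on;            -- include actual run times, not just estimates
```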
To view these logs, head on over to the Azure portal, visit the PostgreSQL Flexible Server service, select the database whose logs you want to view, and select the Logs blade. You can use the following query to filter the records. Feel free to add a timeframe and additional filters to fit your needs.
```kusto
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DBFORPOSTGRESQL"
| where Category == "PostgreSQLLogs"
| where Message contains "plan:"
```
If you want to capture the output of any of these investigation queries to a file, you can run the psql connection command with a `-c` flag containing the query you want to run, then redirect the output into a file for viewing or export.
psql "FILL_IN_DB_CONNECTION_DETAILS" -c "select userid::regrole, dbid, query from pg_stat_statements order by (blk_read_time+blk_write_time)/calls desc limit 10;" > /export/io_intensive_queries.txt