diff --git a/README.md b/README.md
index 32b06623..cc4df09c 100644
--- a/README.md
+++ b/README.md
@@ -14,29 +14,60 @@
 ## Overview

-Supavisor is a scalable, cloud-native Postgres connection pooler. A Supavisor cluster is capable of proxying millions of Postgres end-client connections into a stateful pool of native Postgres database connections.
+Supavisor is a scalable, cloud-native Postgres connection pooler. A Supavisor
+cluster is capable of proxying millions of Postgres end-client connections into
+a stateful pool of native Postgres database connections.

-For database managers, Supavisor simplifies the task of managing Postgres clusters by providing easy configuration of highly available Postgres clusters ([todo](#future-work)).
+For database managers, Supavisor simplifies the task of managing Postgres
+clusters by providing easy configuration of highly available Postgres clusters
+([todo](#future-work)).

 ## Motivation

 We have several goals with Supavisor:

-- **Zero-downtime scaling**: we want to scale Postgres server compute with zero-downtime. To do this, we need an external Pooler that can buffer and re-route requests while the resizing operation is in progress.
-- **Handling modern connection demands**: We need a Pooler that can absorb millions of connections. We often see developers connecting to Postgres from Serverless environments, and so we also need something that works with both TCP and HTTP protocols.
-- **Efficiency**: Our customers pay for database processing power, and our goal is to maximize their database capacity. While PgBouncer is resource-efficient, it still consumes some resources on the database instance. By moving connection pooling to a dedicated cluster adjacent to tenant databases, we can free up additional resources to better serve customer queries.
+- **Zero-downtime scaling**: we want to scale Postgres server compute with zero
+  downtime. To do this, we need an external pooler that can buffer and re-route
+  requests while the resizing operation is in progress.
+- **Handling modern connection demands**: We need a pooler that can absorb
+  millions of connections. We often see developers connecting to Postgres from
+  serverless environments, so we also need something that works with both TCP
+  and HTTP protocols.
+- **Efficiency**: Our customers pay for database processing power, and our goal
+  is to maximize their database capacity. While PgBouncer is resource-efficient,
+  it still consumes some resources on the database instance. By moving connection
+  pooling to a dedicated cluster adjacent to tenant databases, we can free up
+  additional resources to better serve customer queries.

 ## Architecture

-Supavisor was designed to work in a cloud computing environment as a highly available cluster of nodes. Tenant configuration is stored in a highly available Postgres database. Configuration is loaded from the Supavisor database when a tenant connection pool is initiated.
-
-Connection pools are dynamic. When a tenant client connects to the Supavisor cluster the tenant pool is started and all connections to the tenant database are established. The process ID of the new tenant pool is then distributed to all nodes of the cluster and stored in an in-memory key-value store. Subsequent tenant client connections live on the inbound node but connection data is proxied from the pool node to the client connection node as needed.
-
-Because the count of Postgres connections is constrained only one tenant connection pool should be alive in a Supavisor cluster. In the case of two simultaneous client connections starting a pool, as the pool process IDs are distributed across the cluster, eventually one of those pools is gracefully shutdown.
-
-The dynamic nature of tenant database connection pools enables high availability in the event of node outages. Pool processes are monitored by each node. If a node goes down that process ID is removed from the cluster. Tenant clients will then start a new pool automatically as they reconnect to the cluster.
-
-This design enables blue-green or rolling deployments as upgrades require. A single VPC / multiple availability zone topologies is possible and can provide for greater redundancy when load balancing queries across read replicas are supported ([todo](#future-work)).
+Supavisor was designed to work in a cloud computing environment as a highly
+available cluster of nodes. Tenant configuration is stored in a highly available
+Postgres database. Configuration is loaded from the Supavisor database when a
+tenant connection pool is initiated.
+
+Connection pools are dynamic. When a tenant client connects to the Supavisor
+cluster, the tenant pool is started and all connections to the tenant database
+are established. The process ID of the new tenant pool is then distributed to
+all nodes of the cluster and stored in an in-memory key-value store. Subsequent
+tenant client connections live on the inbound node, but connection data is
+proxied from the pool node to the client connection node as needed.
+
+Because the count of Postgres connections is constrained, only one tenant
+connection pool should be alive in a Supavisor cluster. In the case of two
+simultaneous client connections starting a pool, as the pool process IDs are
+distributed across the cluster, eventually one of those pools is gracefully
+shut down.
+
+The dynamic nature of tenant database connection pools enables high availability
+in the event of node outages. Pool processes are monitored by each node. If a
+node goes down, that process ID is removed from the cluster. Tenant clients will
+then start a new pool automatically as they reconnect to the cluster.
+
+This design enables blue-green or rolling deployments as upgrades require. A
+single-VPC, multiple-availability-zone topology is possible and can provide
+for greater redundancy when load balancing queries across read replicas is
+supported ([todo](#future-work)).
@@ -68,13 +99,15 @@ This design enables blue-green or rolling deployments as upgrades require. A sin
 - NOT run in a serverless environment
 - NOT dependent on Kubernetes
 - Observable
-  - Easily understand throughput by tenant, tenant database or individual connection
+  - Easily understand throughput by tenant, tenant database, or individual
+    connection
   - Prometheus `/metrics` endpoint
 - Manageable
   - OpenAPI spec at `/api/openapi`
   - SwaggerUI at `/swaggerui`
 - Highly available
-  - When deployed as a Supavisor cluster and a node dies connection pools should be quickly spun up or already available on other nodes when clients reconnect
+  - When deployed as a Supavisor cluster and a node dies, connection pools should
+    be quickly spun up or already available on other nodes when clients reconnect
 - Connection buffering
   - Brief connection buffering for transparent database restarts or failovers

@@ -82,9 +115,11 @@ This design enables blue-green or rolling deployments as upgrades require. A sin
 - Load balancing
   - Queries can be load balanced across read-replicas
-  - Load balancing is independant of Postgres high-availability management (see below)
+  - Load balancing is independent of Postgres high-availability management (see
+    below)
 - Query caching
-  - Query results are optionally cached in the pool cluster and returned before hitting the tenant database
+  - Query results are optionally cached in the pool cluster and returned before
+    hitting the tenant database
 - Session pooling
   - Like `PgBouncer`
 - Multi-protocol Postgres query interface
@@ -96,8 +131,9 @@ This design enables blue-green or rolling deployments as upgrades require. A sin
 - Health checks
 - Push button read-replica configuration
 - Config as code
-  - Not only for the supavisor cluster but tenant databases and tenant database clusters as well
-  - Pulumi / terraform support
+  - Not only for the Supavisor cluster but for tenant databases and tenant
+    database clusters as well
+  - Pulumi / Terraform support

 ## Benchmarks

@@ -152,16 +188,17 @@ tps = 189.228103 (without initial connection time)
 - Supavisor two-node cluster
 - 64vCPU / 246GB RAM
 - Ubuntu 22.04.2 aarch64
-- 1_003_200 concurrent client connection
-- 20_000+ QPS
+- 1 003 200 concurrent client connections
+- 20 000+ QPS
 - 400 tenant Postgres connections
-- `select * from (values (1, 'one'), (2, 'two'), (3, 'three')) as t (num,letter);`
+- `SELECT * FROM (VALUES (1, 'one'), (2, 'two'), (3, 'three')) AS t (num, letter);`
 - ~50% CPU utilization (pool owner node)
 - 7.8G RAM usage

 ## Acknowledgements

-[José Valim](https://github.com/josevalim) and the [Dashbit](https://dashbit.co/) team were incredibly helpful in informing the design decisions for Supavisor.
+[José Valim](https://github.com/josevalim) and the [Dashbit](https://dashbit.co/) team were incredibly helpful in informing
+the design decisions for Supavisor.

 ## Inspiration

diff --git a/VERSION b/VERSION
index c3e7007a..0516ac10 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-1.1.64
+1.1.65
diff --git a/docs/configuration/pool_modes.md b/docs/configuration/pool_modes.md
index a7a9255c..f5782f6c 100644
--- a/docs/configuration/pool_modes.md
+++ b/docs/configuration/pool_modes.md
@@ -1,4 +1,5 @@
-Configure the `mode_type` on the `user` to set how Supavisor connection pools will behave.
+Configure the `mode_type` on the `user` to set how Supavisor connection pools
+will behave.
 The `mode_type` can be one of:
@@ -8,11 +9,13 @@ The `mode_type` can be one of:

 ## Transaction Mode

-`transaction` mode assigns a connection to a client for the duration of a single transaction.
+`transaction` mode assigns a connection to a client for the duration of a single
+transaction.

 ## Session Mode

-`session` mode assigns a connection to a client for the duration of the client connection.
+`session` mode assigns a connection to a client for the duration of the client
+connection.

 ## Native Mode

diff --git a/docs/configuration/tenants.md b/docs/configuration/tenants.md
index 2ef6a175..595650cc 100644
--- a/docs/configuration/tenants.md
+++ b/docs/configuration/tenants.md
@@ -1,8 +1,11 @@
-All configuration options for a tenant are stored on the `tenant` record in the metadata database used by Supavisor.
+All configuration options for a tenant are stored on the `tenant` record in the
+metadata database used by Supavisor.

-A `tenant` is looked via the `external_id` discovered in the incoming client connection.
+A `tenant` is looked up via the `external_id` discovered in the incoming client
+connection.

-All `tenant` fields and their types are defined in the `Supavisor.Tenants.Tenant` module.
+All `tenant` fields and their types are defined in the
+`Supavisor.Tenants.Tenant` module.

 ## Field Descriptions

@@ -22,13 +25,16 @@ All `tenant` fields and their types are defined in the `Supavisor.Tenants.Tenant

 `upstream_verify` - how to verify the SSL certificate

-`upstream_tls_ca` - the ca certificate to use when connecting to the database server
+`upstream_tls_ca` - the CA certificate to use when connecting to the database
+server

 `enforce_ssl` - enforce an SSL connection on client connections

-`require_user` - require client connection credentials to match `user` credentials in the metadata database
+`require_user` - require client connection credentials to match `user`
+credentials in the metadata database

-`auth_query` - the query to use when matching credential agains a client connection
+`auth_query` - the query to use when matching credentials against a client
+connection

 `default_pool_size` - the default size of the database pool

diff --git a/docs/configuration/users.md b/docs/configuration/users.md
index dad3d37b..850bbb86 100644
--- a/docs/configuration/users.md
+++ b/docs/configuration/users.md
@@ -1,6 +1,8 @@
-All configuration options for a tenant `user` are stored on the `user` record in the metadata database used by Supavisor.
+All configuration options for a tenant `user` are stored on the `user` record in
+the metadata database used by Supavisor.

-All `user` fields and their types are defined in the `Supavisor.Tenants.User` module.
+All `user` fields and their types are defined in the `Supavisor.Tenants.User`
+module.
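+
+As a concrete sketch, a `user` entry in a tenant payload might look like the
+following (the values mirror the development setup docs and are illustrative
+only; the fields are described below):
+
+```json
+{
+  "db_user": "postgres",
+  "db_password": "postgres",
+  "mode_type": "transaction",
+  "pool_size": 20,
+  "is_manager": true
+}
+```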
 ## Field Descriptions
@@ -10,12 +12,15 @@ All `user` fields and their types are defined in the `Supavisor.Tenants.User` mo

 `db_user_alias` - client connection user will also match this user record

-`is_manager` - these credentials are used to perform management queries against the tenant database
+`is_manager` - these credentials are used to perform management queries against
+the tenant database

 `mode_type` - the pool mode type

-`pool_size` - the database connection pool size used to override `default_pool_size` on the `tenant`
+`pool_size` - the database connection pool size used to override
+`default_pool_size` on the `tenant`

-`pool_checkout_timeout` - the maximum duration allowed for a client connection to checkout a database connection from the pool
+`pool_checkout_timeout` - the maximum duration allowed for a client connection
+to check out a database connection from the pool

 `max_clients` - the maximum number of client connections allowed for this user

diff --git a/docs/connecting/authentication.md b/docs/connecting/authentication.md
index 14f219fa..24c1cb0f 100644
--- a/docs/connecting/authentication.md
+++ b/docs/connecting/authentication.md
@@ -1,16 +1,20 @@
-When a client connection is established Supavisor needs to verify the credentials of the connection.
+When a client connection is established, Supavisor needs to verify the
+credentials of the connection.

 Credential verification is done either via `user` records or an `auth_query`.

 ## Tenant User Record

-If no `auth_query` exists on the `tenant` record credentials will be looked up from a `user` and verified against the client connection string credentials.
+If no `auth_query` exists on the `tenant` record, credentials will be looked up
+from a `user` and verified against the client connection string credentials.

 There must be one or more `user` records for a `tenant` where `is_manager` is `false`.

 ## Authentication Query

-If the `user` in the client connection is not found for a `tenant` it will use the `user` where `is_manager` is `true` and the `auth_query` on the `tenant` to return matching credentials from the tenant database.
+If the `user` in the client connection is not found for a `tenant`, Supavisor
+will use the `user` where `is_manager` is `true` and the `auth_query` on the
+`tenant` to return matching credentials from the tenant database.

 A simple `auth_query` can be:

@@ -43,7 +47,8 @@ REVOKE ALL ON FUNCTION supavisor.get_auth(p_usename TEXT) FROM PUBLIC;
 GRANT EXECUTE ON FUNCTION supavisor.get_auth(p_usename TEXT) TO supavisor;
 ```

-Update the `auth_query` on the `tenant` and it will use this query to match against client connection credentials.
+Update the `auth_query` on the `tenant` and Supavisor will use this query to
+match against client connection credentials.

 ```sql
 SELECT * FROM supavisor.get_auth($1)

diff --git a/docs/connecting/overview.md b/docs/connecting/overview.md
index dacb21ca..3a684812 100644
--- a/docs/connecting/overview.md
+++ b/docs/connecting/overview.md
@@ -1,6 +1,8 @@
-To connect to a tenant database Supavisor needs to look up the tenant with an `external_id`.
+To connect to a tenant database, Supavisor needs to look up the tenant with an
+`external_id`.

-You can connect to Supavisor just like you connect to Postgres except we need to include the `external_id` in the connection string.
+You can connect to Supavisor just like you connect to Postgres, except you need
+to include the `external_id` in the connection string.
 Supavisor parses the `external_id` from a connection in one of three ways:
@@ -14,7 +16,8 @@ Supavisor parses the `external_id` from a connection in one of three ways:

 ## Username

-Include the `external_id` in the username. The `external_id` is found after the `.` in the username:
+Include the `external_id` in the username. The `external_id` is found after
+the `.` (dot) in the username:

 ```
 psql postgresql://postgres.dev_tenant:postgres@localhost:6543/postgres

diff --git a/docs/deployment/fly.md b/docs/deployment/fly.md
index 668c4b04..eb7512b0 100644
--- a/docs/deployment/fly.md
+++ b/docs/deployment/fly.md
@@ -6,15 +6,18 @@ Type the following command in your terminal:
 fly launch
 ```

-Choose a name for your app when prompted, then answer "yes" to the following question:
+Choose a name for your app when prompted, then answer "yes" to the following
+question:

 ```bash
 Would you like to copy its configuration to the new app? (y/N)
 ```

-Next, select an organization and choose a region. You don't need to deploy the app yet.
+Next, select an organization and choose a region. You don't need to deploy the
+app yet.

-Since the pooler uses an additional port (7654) for the PostgreSQL protocol, you need to reserve an IP address:
+Since the pooler uses an additional port (7654) for the PostgreSQL protocol, you
+need to reserve an IP address:

 ```bash
 fly ips allocate-v4

diff --git a/docs/development/docs.md b/docs/development/docs.md
index a3c665b4..28d7b0e1 100644
--- a/docs/development/docs.md
+++ b/docs/development/docs.md
@@ -14,6 +14,6 @@ Build and serve the documentation locally with:

 `mkdocs serve`

-Production documentation is built on merge into `main` with the Github Action:
+Production documentation is built on merge into `main` with the GitHub Action:

 `/.github/workflows/docs.yml`

diff --git a/docs/development/installation.md b/docs/development/installation.md
index 0c773a1b..a4d04b69 100644
--- a/docs/development/installation.md
+++ b/docs/development/installation.md
@@ -1,10 +1,14 @@
-Before starting, set up the database where Supavisor will store tenants' data. The following command will pull a Docker image with PostgreSQL 14 and run it on port 6432:
+Before starting, set up the database where Supavisor will store tenants' data.
+The following command will pull a Docker image with PostgreSQL 14 and run it on
+port 6432:

 ```
 docker-compose -f ./docker-compose.db.yml up
 ```

-> `Supavisor` stores tables in the `supavisor` schema. The schema should be automatically created by the `dev/postgres/00-setup.sql` file. If you encounter issues with migrations, ensure that this schema exists.
+> `Supavisor` stores tables in the `supavisor` schema. The schema should be
+> automatically created by the `dev/postgres/00-setup.sql` file. If you
+> encounter issues with migrations, ensure that this schema exists.

 Next, get dependencies and apply migrations:

diff --git a/docs/development/setup.md b/docs/development/setup.md
index 20819493..e7ed2d42 100644
--- a/docs/development/setup.md
+++ b/docs/development/setup.md
@@ -10,12 +10,12 @@ Start the Supavisor database to store tenant information:
 make db_start && make db_migrate
 ```

-You need to add tenants to the database. For example, the following request will add the `dev_tenant` with credentials to the database set up earlier.
-
+You need to add tenants to the database. For example, the following request will
+add the `dev_tenant` with credentials to the database set up earlier.
 ```bash
 curl -X PUT \
-  'http://localhost:4000/api/tenants/dev_tenant \
+  'http://localhost:4000/api/tenants/dev_tenant' \
   --header 'Accept: */*' \
   --header 'User-Agent: Thunder Client (https://www.thunderclient.com)' \
   --header 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJvbGUiOiJhbm9uIiwiaWF0IjoxNjQ1MTkyODI0LCJleHAiOjE5NjA3Njg4MjR9.M9jrxyvPLkUxWgOYSf5dNdJ8v_eRrq810ShFRT8N-6M' \
@@ -25,36 +25,42 @@
     "db_host": "localhost",
     "db_port": 6432,
     "db_database": "postgres",
-    "ip_version": "auto",
-    "enforce_ssl": false,
-    "require_user": false,
-    "auth_query": "SELECT rolname, rolpassword FROM pg_authid WHERE rolname=$1;",
+    "ip_version": "auto",
+    "enforce_ssl": false,
+    "require_user": false,
+    "auth_query": "SELECT rolname, rolpassword FROM pg_authid WHERE rolname=$1;",
     "users": [
       {
         "db_user": "postgres",
         "db_password": "postgres",
         "pool_size": 20,
-        "mode_type": "transaction",
-        "is_manager": true
+        "mode_type": "transaction",
+        "is_manager": true
       }
     ]
   }
 }'
 ```

-Now, it's possible to connect through the proxy. By default, Supavisor uses port `6543` for transaction mode and `5432` for session mode:
+Now, it's possible to connect through the proxy. By default, Supavisor uses port
+`6543` for transaction mode and `5432` for session mode:

 ```
 psql postgresql://postgres.dev_tenant:postgres@localhost:6543/postgres
 ```

-> :warning: The tenant's ID is incorporated into the username and separated by the `.` symbol. For instance, for the username `some_username` belonging to the tenant `some_tenant`, the modified username will be `some_username.some_tenant`. This approach enables the system to support multi-tenancy on a single IP address.
+> :warning: The tenant's ID is incorporated into the username and separated by
+> the `.` symbol. For instance, for the username `some_username` belonging to
+> the tenant `some_tenant`, the modified username will be
+> `some_username.some_tenant`. This approach enables the system to support
+> multi-tenancy on a single IP address.

-As a general note, if you are not using the `Makefile` you will have to set a `VAULT_ENC_KEY` which should be at least 32 bytes long.
+As a general note, if you are not using the `Makefile`, you will have to set a
+`VAULT_ENC_KEY`, which should be at least 32 bytes long.

 ## General Commands

-Here's an overview of the commands and the options you can use. 
+Here's an overview of the commands and the options you can use.

 ### Add/update tenant

diff --git a/docs/faq.md b/docs/faq.md
index a39b0903..720b39e4 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -2,20 +2,39 @@ Answers to frequently asked questions.

 ## What happens when I hit my connection limit?

-The connection (or client) limit is set by the `default_max_clients` on the `tenant` record or `max_clients` on the `user`.
+The connection (or client) limit is set by the `default_max_clients` on the
+`tenant` record or `max_clients` on the `user`.

-Say your connection limit is 1000. When you try to connect client number 1001 this client will receive the error `Max client connections reached` which will be returned as a Postgres error to your client in the wire protocol and subsequently should show up in your exception monitoring software.
+Say your connection limit is 1000. When you try to connect client number 1001,
+that client will receive the error `Max client connections reached`, which will
+be returned as a Postgres error to your client in the wire protocol and
+subsequently should show up in your exception monitoring software.
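+
+If you need more headroom, you can raise `max_clients` on the `user` (or
+`default_max_clients` on the `tenant`). The sketch below reuses the tenant
+management endpoint from the development setup docs; the `$API_JWT` placeholder
+and the assumption that `max_clients` can be set through this payload are ours,
+so verify both against your deployment:
+
+```bash
+# Illustrative only: raise max_clients for a tenant user via the management API.
+curl -X PUT \
+  'http://localhost:4000/api/tenants/dev_tenant' \
+  --header "Authorization: Bearer $API_JWT" \
+  --header 'Content-Type: application/json' \
+  --data-raw '{
+  "tenant": {
+    "db_host": "localhost",
+    "db_port": 6432,
+    "db_database": "postgres",
+    "users": [
+      {
+        "db_user": "postgres",
+        "db_password": "postgres",
+        "pool_size": 20,
+        "max_clients": 2000,
+        "mode_type": "transaction",
+        "is_manager": true
+      }
+    ]
+  }
+}'
+```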
 ## Does Supavisor support prepared statements?

-As of 1.0 Supavisor supports prepared statements. Supavisor will detect `prepare` statements and issue those to all database connections. All clients will then be able to address those prepared statements by name when issuing `execute` statements.
+As of 1.0 Supavisor supports prepared statements. Supavisor will detect
+`prepare` statements and issue those to all database connections. All clients
+will then be able to address those prepared statements by name when issuing
+`execute` statements.

 ## Why do you route connections to a single Supavisor node when deployed as a cluster?

-Supavisor can run as a cluster of nodes for high availability. The first node to receive a connection from a tenant spins up the connection pool on that node. Connections coming in to other nodes will route data do the owner node of the tenant pool.
-
-We could run one pool per node and divide the database connection pool by N nodes but then we'd have to keep connection counts to the database in sync across all nodes. While not impossible at all, there could be some delay here temporarily causing more connections to the database than we want.
-
-By running one pool on one node in a cluster for a tenant we can guarantee that the amount of connections to the database will be the `default_pool_size` set on the tenant.
-
-Also running N pools on N nodes for N clients will not scale horizontally as well because all nodes will be doing all the same work of issuing database connections to clients. While not a lot of overhead, at some point this won't scale and we'd have to run multiple independant clusters and route tenants to clusters to scale horizontally.
+Supavisor can run as a cluster of nodes for high availability. The first node to
+receive a connection from a tenant spins up the connection pool on that node.
+Connections coming into other nodes will route data to the owner node of the
+tenant pool.
+
+We could run one pool per node and divide the database connection pool by N
+nodes, but then we'd have to keep connection counts to the database in sync
+across all nodes. While not impossible, there could be some delay here,
+temporarily causing more connections to the database than we want.
+
+By running one pool on one node in a cluster for a tenant, we can guarantee that
+the number of connections to the database will be the `default_pool_size` set on
+the tenant.
+
+Also, running N pools on N nodes for N clients will not scale horizontally as
+well, because all nodes will be doing all the same work of issuing database
+connections to clients. While not a lot of overhead, at some point this won't
+scale and we'd have to run multiple independent clusters and route tenants to
+clusters to scale horizontally.
diff --git a/docs/migrating/pgbouncer.md b/docs/migrating/pgbouncer.md
index 9dcbfbb3..02fe0af7 100644
--- a/docs/migrating/pgbouncer.md
+++ b/docs/migrating/pgbouncer.md
@@ -1,10 +1,14 @@
-Migrating from PgBouncer is straight forward once a Supavisor cluster is setup and a database has been added as a `tenant`.
+Migrating from PgBouncer is straightforward once a Supavisor cluster is set up
+and a database has been added as a `tenant`.

-No application level code changes should be required other than a connection string change. Both `transaction` and `session` pool mode behavior for Supavisor is the same as PgBouncer.
+No application-level code changes should be required other than a connection
+string change. Both `transaction` and `session` pool mode behavior in Supavisor
+is the same as in PgBouncer.

 One caveat during migration is running two connection poolers at the same time.

-When rolling out a connection string change to your application you will momentarily need to support two connection pools to Postgres.
+When rolling out a connection string change to your application, you will
+temporarily need to support two connection pools to Postgres.

 ## Check Postgres connection limit

@@ -24,27 +28,34 @@ select count(*) from pg_stat_activity;

 ## Change Postgres `max_connections`

-Based on the responses above configure the `default_pool_size` accordingly or increase your `max_connections` limit on Postgres to accomadate two connection poolers.
+Based on the responses above, configure the `default_pool_size` accordingly or
+increase your `max_connections` limit on Postgres to accommodate two connection
+poolers.

-e.g if you're using 30 connections out of 100 and you set your `default_pool_size` to 20 you have enough connections to run a new Supavisor pool along side your PgBouncer pool.
+E.g. if you're using 30 connections out of 100 and you set your
+`default_pool_size` to 20, you have enough connections to run a new Supavisor
+pool alongside your PgBouncer pool.

-If you are using 90 connections out of 100 and your `default_pool_size` is set to 20 you will have problems during the deployment of your Supavisor connection string because you will hit your Postgres `max_connections` limit.
+If you are using 90 connections out of 100 and your `default_pool_size` is set
+to 20, you will have problems during the deployment of your Supavisor connection
+string because you will hit your Postgres `max_connections` limit.

 ## Verify Supavisor connections

-Once we've got Supavisor started we can verify it's using the amount of connections we set for `default_pool_size`:
+Once we've got Supavisor started, we can verify it's using the number of
+connections we set for `default_pool_size`:

 ```sql
-select
-  count(*) as count,
+SELECT
+  COUNT(*) AS count,
   usename,
   application_name
-from pg_stat_activity
-where application_name ilike '%Supavisor%'
-group by
+FROM pg_stat_activity
+WHERE application_name ILIKE '%Supavisor%'
+GROUP BY
   usename,
   application_name
-order by application_name desc;
+ORDER BY application_name DESC;
 ```

 ## Celebrate!
diff --git a/docs/monitoring/metrics.md b/docs/monitoring/metrics.md
index a845257d..56eb5544 100644
--- a/docs/monitoring/metrics.md
+++ b/docs/monitoring/metrics.md
@@ -1,4 +1,5 @@
-The metrics feature provides a range of metrics in the Prometheus format. The main modules involved in this implementation are:
+The metrics feature provides a range of metrics in the Prometheus format. The
+main modules involved in this implementation are:

 - `Supavisor.Monitoring.PromEx`
 - `Supavisor.PromEx.Plugins.OsMon`

@@ -7,15 +8,23 @@

 ## Endpoint

-To use the metrics feature, send an HTTP request to the `/metrics` endpoint. The endpoint is secured using Bearer authentication, which requires a JSON Web Token (JWT) generated using the `METRICS_JWT_SECRET` environment variable. Make sure to set this environment variable with a secure secret key.
+To use the metrics feature, send an HTTP request to the `/metrics` endpoint. The
+endpoint is secured using Bearer authentication, which requires a JSON Web Token
+(JWT) generated using the `METRICS_JWT_SECRET` environment variable. Make sure
+to set this environment variable with a secure secret key.

-When a node receives a request for metrics, it polls all nodes in the cluster, accumulates their metrics, and appends service tags such as region and host. To generate a valid JWT, use a library or tool that supports JWT creation with the HS256 algorithm and the `METRICS_JWT_SECRET` as the secret key.
+When a node receives a request for metrics, it polls all nodes in the cluster,
+accumulates their metrics, and appends service tags such as region and host. To
+generate a valid JWT, use a library or tool that supports JWT creation with the
+HS256 algorithm and the `METRICS_JWT_SECRET` as the secret key.

-Remember to keep the `METRICS_JWT_SECRET` secure and only share it with authorized personnel who require access to the metrics endpoint.
+Remember to keep the `METRICS_JWT_SECRET` secure and only share it with
+authorized personnel who require access to the metrics endpoint.

 ### Filtered per tenant

-Metrics endpoints filtered for specific tenants are available at their own endpoints:
+Metrics filtered for a specific tenant are available at a dedicated per-tenant
+endpoint:

 ```
 /metrics/:external_id

@@ -30,13 +39,14 @@ The exposed metrics include the following:

 - Phoenix
 - Ecto
 - System monitoring metrics:
-  - CPU utilization
-  - RAM usage
-  - Load average (LA)
+    - CPU utilization
+    - RAM usage
+    - Load average (LA)

 ## Tenant metrics

-Supavisor also tags many metrics with the `tenant` `external_id` so you can drill down to metrics per tenant:
+Supavisor also tags many metrics with the `tenant` `external_id` so you can
+drill down to metrics per tenant:

 - Pool checkout queue time
 - Number of connected clients
diff --git a/docs/orms/prisma.md b/docs/orms/prisma.md
index a959bc14..c15e887e 100644
--- a/docs/orms/prisma.md
+++ b/docs/orms/prisma.md
@@ -2,13 +2,16 @@ Connecting to a Postgres database with Prisma is easy.

 ## PgBouncer Compatibility

-Supavisor pool modes behave the same way as PgBouncer. You should be able to connect to Supavisor with the exact same connection string as you use for PgBouncer.
+Supavisor pool modes behave the same way as PgBouncer. You should be able to
+connect to Supavisor with the exact same connection string as you use for
+PgBouncer.

 ## Named Prepared Statements

 Prisma will use named prepared statements to query Postgres by default.

-To turn off named prepared statements use `pgbouncer=true` in your connection string with Prisma.
+To turn off named prepared statements, use `pgbouncer=true` in your connection
+string with Prisma.

 The `pgbouncer=true` connection string parameter is compatible with Supavisor.
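+
+For example, appending the parameter to the development connection string used
+throughout these docs (host, tenant, and credentials are placeholders for your
+own values):
+
+```
+postgresql://postgres.dev_tenant:postgres@localhost:6543/postgres?pgbouncer=true
+```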