diff --git a/docs/source/routing/about-router.mdx b/docs/source/routing/about-router.mdx
index 55a962573e..6f9d428256 100644
--- a/docs/source/routing/about-router.mdx
+++ b/docs/source/routing/about-router.mdx
@@ -4,7 +4,6 @@ subtitle: Learn the basics about router features and deployment types
description: Apollo provides cloud and self-hosted GraphOS Router options. The router acts as an entry point to your GraphQL APIs and provides a unified interface for clients to interact with.
redirectFrom:
- /graphos/routing
- - /federation/query-plans
---
## What is GraphOS Router?
diff --git a/docs/source/routing/observability/error-reporting.mdx b/docs/source/routing/observability/error-reporting.mdx
new file mode 100644
index 0000000000..5b340e177c
--- /dev/null
+++ b/docs/source/routing/observability/error-reporting.mdx
@@ -0,0 +1,39 @@
+---
+title: Opt-In Error Reporting for Managed Federation
+subtitle: Configure out-of-band reporting
+description: Learn how to configure your managed gateway to send error reports to Apollo via out-of-band reporting, improving performance and reliability.
+---
+
+You can configure your managed gateway to send error reports to Apollo via _out-of-band reporting_. These reports help Apollo improve the performance and reliability of managed federation.
+
+## Enabling reporting
+
+To enable out-of-band error reporting, set the following environment variable in your gateway's environment:
+
+```bash
+APOLLO_OUT_OF_BAND_REPORTER_ENDPOINT=https://outofbandreporter.api.apollographql.com
+```
+
+The next time you start up your gateway, out-of-band error reporting is enabled.
+
+
+
+If you've enabled out-of-band reporting in the past, you might have specified a URL that is now deprecated. Double-check your configuration to make sure you've specified the URL listed above.
+
+
+
+## How it works
+
+Whenever your gateway fails to fetch its supergraph schema from Apollo due to an error, the out-of-band reporting mechanism sends an error report to Apollo as a GraphQL mutation.
+
+The report provides the following information as GraphQL variables:
+
+* The error code and message produced by the gateway
+* The HTTP request URL and body
+* The HTTP response status code and body
+* The `started-at` and `end-at` times of the request
+
+It also provides the following HTTP headers:
+
+* `apollographql-client-name`: The name of the GraphQL client used by the gateway
+* `apollographql-client-version`: The version number of the GraphQL client used by the gateway
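+
+For illustration only, the report's contents map roughly to a payload like the following. The variable and field names here are hypothetical (the actual mutation is internal to Apollo); they're shown only to make the lists above concrete:
+
+```js
+// Hypothetical sketch of an out-of-band error report's variables and headers.
+// The real mutation and its field names are internal to Apollo and may differ.
+const reportVariables = {
+  error: {
+    code: 'FETCH_FAILED', // hypothetical error code
+    message: 'Failed to fetch supergraph schema',
+  },
+  request: {
+    url: 'https://uplink.api.apollographql.com/',
+    body: '{ "query": "..." }',
+  },
+  response: {
+    httpStatusCode: 500,
+    body: '...',
+  },
+  startedAt: '2024-01-01T00:00:00.000Z',
+  endedAt: '2024-01-01T00:00:30.000Z',
+};
+
+const reportHeaders = {
+  'apollographql-client-name': 'apollo-gateway', // hypothetical values
+  'apollographql-client-version': '2.x.x',
+};
+```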
diff --git a/docs/source/routing/observability/federated-trace-data.mdx b/docs/source/routing/observability/federated-trace-data.mdx
new file mode 100644
index 0000000000..c5d434c938
--- /dev/null
+++ b/docs/source/routing/observability/federated-trace-data.mdx
@@ -0,0 +1,81 @@
+---
+title: Federated Trace Data
+subtitle: Reporting fine-grained performance metrics
+description: Explore how federated traces enable fine-grained performance metrics reporting. Learn about the reporting flow and how tracing data is exposed and aggregated.
+---
+
+One of the many benefits of using GraphQL as an API layer is that it enables fine-grained, field-level [tracing](/graphos/metrics/#resolver-level-traces) of every executed operation. The [GraphOS platform](/graphos/) can consume and aggregate these traces to provide detailed insights into your supergraph's usage and performance.
+
+Your supergraph's router can generate _federated traces_ and [report them to GraphOS](/graphos/metrics/sending-operation-metrics). A federated trace is assembled from timing and error information provided by each subgraph that helps resolve a particular operation.
+
+## Reporting flow
+
+The overall flow of a federated trace is as follows:
+
+1. The router receives an operation from a client.
+2. The router generates a [query plan](/federation/query-plans) for the operation and delegates sub-queries to individual subgraphs.
+3. Each queried subgraph returns response data to the router.
+ - The `extensions` field of each response includes trace data for the corresponding sub-query.
+ - The subgraph must support the federated trace format to include trace data in its response. See [this section](#in-your-subgraphs).
+4. The router collects the set of sub-query traces from subgraphs and arranges them in the shape of the query plan.
+5. The router [reports the federated trace to GraphOS](/graphos/metrics/sending-operation-metrics/) for processing.
+
+In summary, subgraphs report timing and error information to the router, and the router is responsible for aggregating those metrics and reporting them to GraphOS.
+
+## Enabling federated tracing
+
+### In your subgraphs
+
+For a subgraph to include trace data in its responses to your router, it must use a subgraph-compatible library that supports the trace format.
+
+To check whether your subgraph library supports federated tracing, see the `FEDERATED TRACING` entry for the library in the [list of subgraph-compatible libraries](/federation/building-supergraphs/supported-subgraphs/).
+
+If your library does support federated tracing, see its documentation to learn how to enable the feature.
+
+
+
+If your subgraph uses Apollo Server with `@apollo/subgraph`, federated tracing is enabled by default. You can customize this behavior with Apollo Server's [inline trace plugin](/apollo-server/api/plugin/inline-trace).
+
+
+
+### In the Apollo Router
+
+See [Sending Apollo Router usage data to GraphOS](/router/configuration/telemetry/apollo-telemetry).
+
+### In `@apollo/gateway`
+
+You can use the `@apollo/server` package's [built-in usage reporting plugin](/apollo-server/api/plugin/usage-reporting) to enable federated tracing for your gateway. Provide an API key to your gateway via the `APOLLO_KEY` environment variable so that the gateway reports metrics to the default ingress. To ensure that subgraphs don't also report metrics, either don't provide them with an `APOLLO_KEY` or install the [`ApolloServerPluginUsageReportingDisabled` plugin](/apollo-server/api/plugin/usage-reporting/) in your `ApolloServer`.
+
+These options cause the Apollo gateway to collect tracing information from the underlying subgraphs and pass it on, along with the query plan, to the Apollo metrics ingress.
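+
+As a rough sketch (assuming Apollo Server 4, with the gateway and one subgraph shown together for brevity even though they normally run as separate processes), this setup looks like the following:
+
+```js
+// Sketch only: assumes Apollo Server 4, @apollo/gateway, and @apollo/subgraph.
+
+// Gateway process: with APOLLO_KEY (and APOLLO_GRAPH_REF) set in its
+// environment, the usage reporting plugin is installed automatically and
+// federated traces are reported to GraphOS.
+const { ApolloServer } = require('@apollo/server');
+const { ApolloGateway } = require('@apollo/gateway');
+
+const gatewayServer = new ApolloServer({
+  gateway: new ApolloGateway(),
+});
+
+// Subgraph process: disable usage reporting so that only the gateway
+// reports metrics (alternatively, just don't set APOLLO_KEY here).
+const { buildSubgraphSchema } = require('@apollo/subgraph');
+const {
+  ApolloServerPluginUsageReportingDisabled,
+} = require('@apollo/server/plugin/disabled');
+const gql = require('graphql-tag');
+
+const typeDefs = gql`
+  type Query {
+    hello: String
+  }
+`;
+
+const subgraphServer = new ApolloServer({
+  schema: buildSubgraphSchema({
+    typeDefs,
+    resolvers: { Query: { hello: () => 'world' } },
+  }),
+  plugins: [ApolloServerPluginUsageReportingDisabled()],
+});
+```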
+
+
+
+By default, metrics are reported to the `current` GraphOS variant. To change the variant for reporting, set the `APOLLO_GRAPH_VARIANT` environment variable.
+
+
+
+## How tracing data is exposed from a subgraph
+
+
+
+This section explains how your router and subgraphs exchange encoded tracing information. You don't need to understand these details to enable federated tracing.
+
+
+
+Your router inspects the `extensions` field of all subgraph responses for the presence of an `ftv1` field. This field contains a representation of the tracing information for the sub-query that was executed against the subgraph, sent as the Base64 encoding of the [protobuf representation](https://github.com/apollographql/apollo-server/blob/main/packages/usage-reporting-protobuf/src/reports.proto) of the trace.
+
+To obtain this information from a subgraph, the router includes the header pair `'apollo-federation-include-trace': 'ftv1'` in its request (if it's [configured to collect trace data](#in-the-apollo-router)). If the subgraph [supports federated traces](#in-your-subgraphs), it attaches tracing information in the `extensions` field of its response.
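+
+For example, you can observe this exchange by querying a subgraph directly with that header set. Here's a minimal sketch (the subgraph URL and query are placeholders; the built-in `fetch` assumes Node.js 18+):
+
+```js
+// Sketch: send a query directly to a subgraph and check for the ftv1
+// trace extension. The URL and query below are placeholders.
+const SUBGRAPH_URL = 'http://localhost:4001/';
+
+async function checkForFtv1() {
+  const response = await fetch(SUBGRAPH_URL, {
+    method: 'POST',
+    headers: {
+      'content-type': 'application/json',
+      // Ask the subgraph to include federated trace data in its response
+      'apollo-federation-include-trace': 'ftv1',
+    },
+    body: JSON.stringify({ query: '{ __typename }' }),
+  });
+
+  const { extensions } = await response.json();
+  // If the subgraph supports federated tracing, extensions.ftv1 contains the
+  // Base64-encoded protobuf representation of the trace.
+  console.log('ftv1 present:', Boolean(extensions?.ftv1));
+}
+
+checkForFtv1();
+```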
+
+## How traces are constructed and aggregated
+
+Your router constructs traces in the shape of the [query plan](/federation/query-plans/), embedding an individual `Trace` for each fetch performed in the query plan. This captures each sub-query trace, as well as the order in which the subgraphs were fetched.
+
+The field-level statistics that Apollo aggregates from these traces are collected for the fields the subgraphs actually executed. In other words, field stats are based on the operations the query planner generates, not on the operations clients send. Operation-level statistics, on the other hand, are aggregated over the operations executed by clients, so even if query planning changes, those statistics still correspond to the same client-delivered operation.
+
+## How errors work
+
+Apollo Server provides functionality to modify error details sent to clients, via the [`formatError`](/apollo-server/data/errors#for-client-responses) option. It also supports modifying error details sent to the metrics ingress, via the [`sendErrors`](/apollo-server/data/errors#for-apollo-studio-reporting) option of the [inline trace plugin](/apollo-server/api/plugin/inline-trace/).
+
+When modifying errors for the client, you might want to use this option to hide implementation details, like database errors, from your users. When modifying errors for reporting, you might want to obfuscate or redact personal information, like user IDs or emails.
+
+Because federated metrics collection gathers latency and error information from a set of distributed subgraphs, these options are respected in those subgraphs as well as in the router. Subgraphs embed errors in their `ftv1` extension after applying the `rewriteError` method (the one passed to the inline trace plugin in the subgraph, not to the usage reporting plugin in the gateway), and the gateway reports only the errors sent via that extension, ignoring how downstream errors are formatted for end users. This lets subgraph implementers decide how error information appears both to users and in metrics, without requiring any subgraph-specific logic in the gateway.
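+
+For example, here's a sketch of a subgraph that redacts error messages before they're embedded in its `ftv1` extension. It assumes Apollo Server 4, where the inline trace plugin's error-rewriting option is `includeErrors` (older versions used `rewriteError`), so treat the exact option name as version-dependent:
+
+```js
+// Sketch (assumes Apollo Server 4): redact errors before they're embedded in
+// the subgraph's ftv1 trace extension. Older Apollo Server versions used the
+// `rewriteError` option instead of `includeErrors`.
+const { ApolloServer } = require('@apollo/server');
+const {
+  ApolloServerPluginInlineTrace,
+} = require('@apollo/server/plugin/inlineTrace');
+const { buildSubgraphSchema } = require('@apollo/subgraph');
+const { GraphQLError } = require('graphql');
+const gql = require('graphql-tag');
+
+const typeDefs = gql`
+  type Query {
+    hello: String
+  }
+`;
+const resolvers = { Query: { hello: () => 'world' } };
+
+const server = new ApolloServer({
+  schema: buildSubgraphSchema({ typeDefs, resolvers }),
+  plugins: [
+    ApolloServerPluginInlineTrace({
+      includeErrors: {
+        // Strip potentially sensitive details from every error before it's
+        // reported in the trace, keeping only the error code.
+        transform: (err) =>
+          new GraphQLError('Redacted', {
+            extensions: { code: err.extensions?.code },
+          }),
+      },
+    }),
+  ],
+});
+```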
diff --git a/docs/source/routing/observability/otel.mdx b/docs/source/routing/observability/otel.mdx
new file mode 100644
index 0000000000..7c0f3585c0
--- /dev/null
+++ b/docs/source/routing/observability/otel.mdx
@@ -0,0 +1,285 @@
+---
+title: OpenTelemetry in Apollo Federation
+sidebar_title: OpenTelemetry
+subtitle: Configure your federated graph to emit logs, traces, and metrics
+description: Learn how to configure your federated GraphQL services to generate and process telemetry data, including logs, traces, and metrics.
+---
+
+[OpenTelemetry](https://opentelemetry.io/) is a collection of open-source tools for generating and processing telemetry data (such as logs, traces, and metrics) from different systems in a generic and consistent way.
+
+You can configure your gateway, your individual subgraphs, or even a monolithic Apollo Server instance to emit telemetry related to processing GraphQL operations.
+
+Additionally, the `@apollo/gateway` library provides built-in OpenTelemetry instrumentation to emit [gateway-specific spans](#gateway-specific-spans) for operation traces.
+
+If you're using the GraphOS Router, it comes with [built-in support for OpenTelemetry](/graphos/routing/observability/telemetry).
+
+
+
+GraphOS Studio does not currently consume OpenTelemetry-formatted data. To push trace data to Studio, see [Federated trace data](/graphos/routing/observability/federated-trace-data).
+
+You should configure OpenTelemetry if you want to push trace data to an OpenTelemetry-compatible system, such as [Zipkin](https://zipkin.io/) or [Jaeger](https://www.jaegertracing.io/).
+
+
+
+## Setup
+
+### 1. Install required libraries
+
+To use OpenTelemetry in your application, you need to install a baseline set of `@opentelemetry` Node.js libraries. This set differs slightly depending on whether you're setting up your federated gateway or a subgraph/monolith.
+
+
+For a federated gateway, install the following:
+```bash
+npm install \
+ @opentelemetry/api@1.0 \
+ @opentelemetry/core@1.0 \
+ @opentelemetry/resources@1.0 \
+ @opentelemetry/sdk-trace-base@1.0 \
+ @opentelemetry/sdk-trace-node@1.0 \
+ @opentelemetry/instrumentation-http@0.27 \
+ @opentelemetry/instrumentation-express@0.28
+```
+
+
+
+
+For a subgraph or a monolithic Apollo Server instance, install the following:
+```bash
+npm install \
+ @opentelemetry/api@1.0 \
+ @opentelemetry/core@1.0 \
+ @opentelemetry/resources@1.0 \
+ @opentelemetry/sdk-trace-base@1.0 \
+ @opentelemetry/sdk-trace-node@1.0 \
+ @opentelemetry/instrumentation@0.27 \
+ @opentelemetry/instrumentation-http@0.27 \
+ @opentelemetry/instrumentation-express@0.28 \
+ @opentelemetry/instrumentation-graphql@0.27
+```
+
+
+
+Most importantly, subgraphs and monoliths must install `@opentelemetry/instrumentation-graphql`, and gateways must not install it.
+
+As shown above, most `@opentelemetry` libraries have reached `1.0`, but the instrumentation packages haven't yet. The pre-`1.0` versions listed above are compatible with one another at the time of this writing.
+
+#### Update `@apollo/gateway`
+
+If you're using OpenTelemetry in your federated gateway, also update the `@apollo/gateway` library to version `0.31.1` or later to add support for [gateway-specific spans](#gateway-specific-spans).
+
+### 2. Configure instrumentation
+
+Next, update your application to configure your OpenTelemetry instrumentation as early as possible in your app's execution. This must occur before you even import `@apollo/server`, `express`, or `http`. Otherwise, your trace data will be incomplete.
+
+We recommend putting this configuration in its own file, which you import at the very top of `index.js`. A sample file is provided below (note the lines that should either be deleted or uncommented).
+
+```js title="open-telemetry.js"
+// Import required symbols
+const { Resource } = require('@opentelemetry/resources');
+const { SimpleSpanProcessor, ConsoleSpanExporter } = require("@opentelemetry/sdk-trace-base");
+const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");
+const { registerInstrumentations } = require('@opentelemetry/instrumentation');
+const { HttpInstrumentation } = require('@opentelemetry/instrumentation-http');
+const { ExpressInstrumentation } = require('@opentelemetry/instrumentation-express');
+// DELETE IF SETTING UP A GATEWAY, UNCOMMENT OTHERWISE
+// const { GraphQLInstrumentation } = require('@opentelemetry/instrumentation-graphql');
+
+// Register server-related instrumentation
+registerInstrumentations({
+ instrumentations: [
+ new HttpInstrumentation(),
+ new ExpressInstrumentation(),
+ // DELETE IF SETTING UP A GATEWAY, UNCOMMENT OTHERWISE
+ //new GraphQLInstrumentation()
+ ]
+});
+
+// Initialize provider and identify this particular service
+// (in this case, we're implementing a federated gateway)
+const provider = new NodeTracerProvider({
+ resource: Resource.default().merge(new Resource({
+ // Replace with any string to identify this service in your system
+ "service.name": "gateway",
+ })),
+});
+
+// Configure a test exporter to print all traces to the console
+const consoleExporter = new ConsoleSpanExporter();
+provider.addSpanProcessor(
+ new SimpleSpanProcessor(consoleExporter)
+);
+
+// Register the provider to begin tracing
+provider.register();
+```
+
+For now, this code does not push trace data to an external system. Instead, it prints that data to the console for debugging purposes.
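+
+To wire this in, require the file before anything else in your app's entry point. Here's a minimal sketch of a gateway's `index.js`, assuming Apollo Server 4 with Express (via `expressMiddleware`) and managed federation configured through the `APOLLO_KEY` and `APOLLO_GRAPH_REF` environment variables:
+
+```js title="index.js"
+// Load the OpenTelemetry configuration before any other module so the HTTP
+// and Express instrumentation is registered before those modules are imported.
+require('./open-telemetry.js');
+
+const express = require('express');
+const http = require('http');
+const cors = require('cors');
+const { ApolloServer } = require('@apollo/server');
+const { expressMiddleware } = require('@apollo/server/express4');
+const { ApolloGateway } = require('@apollo/gateway');
+
+async function main() {
+  const app = express();
+  const httpServer = http.createServer(app);
+
+  const server = new ApolloServer({
+    // With APOLLO_KEY and APOLLO_GRAPH_REF set, the gateway fetches its
+    // supergraph schema from Apollo Uplink (managed federation).
+    gateway: new ApolloGateway(),
+  });
+  await server.start();
+
+  app.use('/', cors(), express.json(), expressMiddleware(server));
+  httpServer.listen(4000, () => {
+    console.log('Gateway ready at http://localhost:4000/');
+  });
+}
+
+main();
+```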
+
+
+After you make these changes to your app, start it up locally. It should begin printing trace data similar to the following:
+
+
+
+```js
+{
+ traceId: '0ed36c42718622cc726a661a3328aa61',
+ parentId: undefined,
+ name: 'HTTP POST',
+ id: '36c6a3ae19563ec3',
+ kind: 1,
+ timestamp: 1624650903925787,
+ duration: 26793,
+ attributes: {
+ 'http.url': 'http://localhost:4000/',
+ 'http.host': 'localhost:4000',
+ 'net.host.name': 'localhost',
+ 'http.method': 'POST',
+ 'http.route': '',
+ 'http.target': '/',
+ 'http.user_agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36',
+ 'http.request_content_length_uncompressed': 1468,
+ 'http.flavor': '1.1',
+ 'net.transport': 'ip_tcp',
+ 'net.host.ip': '::1',
+ 'net.host.port': 4000,
+ 'net.peer.ip': '::1',
+ 'net.peer.port': 39722,
+ 'http.status_code': 200,
+ 'http.status_text': 'OK'
+ },
+ status: { code: 1 },
+ events: []
+}
+
+{
+ traceId: '0ed36c42718622cc726a661a3328aa61',
+ parentId: '36c6a3ae19563ec3',
+ name: 'middleware - ',
+ id: '3776786d86f24124',
+ kind: 0,
+ timestamp: 1624650903934147,
+ duration: 63,
+ attributes: {
+ 'http.route': '/',
+ 'express.name': '',
+ 'express.type': 'middleware'
+ },
+ status: { code: 0 },
+ events: []
+}
+```
+
+
+
+Nice! Next, we can modify this code to begin pushing trace data to an external service, such as Zipkin or Jaeger.
+
+### 3. Push trace data to a tracing system
+
+Next, let's modify the code in the [previous step](#2-configure-instrumentation) to instead push traces to a locally running instance of [Zipkin](https://zipkin.io/).
+
+
+
+To run Zipkin locally, [see the quickstart](https://zipkin.io/pages/quickstart.html). If you want to use a different tracing system, consult the documentation for that system.
+
+
+
+First, we need to replace our `ConsoleSpanExporter` (which prints traces to the terminal) with a `ZipkinExporter`, which specifically pushes trace data to a running Zipkin instance.
+
+Install the following additional library:
+
+```bash
+npm install @opentelemetry/exporter-zipkin@1.0
+```
+
+Then, import the `ZipkinExporter` in your dedicated OpenTelemetry file:
+
+```js title="open-telemetry.js"
+const { ZipkinExporter } = require("@opentelemetry/exporter-zipkin");
+```
+
+Now we can replace our `ConsoleSpanExporter` with a `ZipkinExporter`. In the code from [the previous step](#2-configure-instrumentation), replace the exporter and span processor setup (lines 31-34) with the following:
+
+```js
+// Configure an exporter that pushes all traces to Zipkin
+// (This assumes Zipkin is running on localhost at the
+// default port of 9411)
+const zipkinExporter = new ZipkinExporter({
+ // url: set_this_if_not_running_zipkin_locally
+});
+provider.addSpanProcessor(
+ new SimpleSpanProcessor(zipkinExporter)
+);
+```
+
+Open Zipkin in your browser at `http://localhost:9411`. You should now be able to query recent trace data in the UI!
+
+You can show the details of any operation and see a breakdown of its processing timeline by span.
+
+### 4. Update for production readiness
+
+Our example telemetry configuration assumes that Zipkin is running locally, and that we want to process every span individually as it's emitted.
+
+To prepare for production, we'll optimize performance by sending our traces to an [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) using the `OTLPTraceExporter`, and by replacing our `SimpleSpanProcessor` with a `BatchSpanProcessor`. The Collector should be deployed as a local sidecar agent that buffers traces before sending them on to their final destination. See the [getting started docs](https://opentelemetry.io/docs/collector/getting-started/) for an overview.
+
+Install the following additional library:
+
+```bash
+npm install @opentelemetry/exporter-trace-otlp-http@0.27
+```
+
+Then, import the `OTLPTraceExporter` and `BatchSpanProcessor` in your dedicated OpenTelemetry file:
+
+
+```js title="open-telemetry.js"
+const { OTLPTraceExporter } = require("@opentelemetry/exporter-trace-otlp-http");
+const { BatchSpanProcessor } = require("@opentelemetry/sdk-trace-base");
+```
+
+Now we can replace our `ZipkinExporter` with an `OTLPTraceExporter`, and our `SimpleSpanProcessor` with a `BatchSpanProcessor`. In the code from [the previous step](#3-push-trace-data-to-a-tracing-system), replace the exporter and span processor setup (lines 4-9) with the following:
+
+```js
+// Configure an exporter that pushes all traces to a Collector
+// (This assumes the Collector is running on the default url
+// of http://localhost:4318/v1/traces)
+const collectorTraceExporter = new OTLPTraceExporter();
+provider.addSpanProcessor(
+ new BatchSpanProcessor(collectorTraceExporter, {
+ maxQueueSize: 1000,
+ scheduledDelayMillis: 1000,
+ }),
+);
+```
+
+You can learn more about using the `OTLPTraceExporter` in the [instrumentation docs](https://opentelemetry.io/docs/instrumentation/js/exporters/).
+
+## GraphQL-specific spans
+
+The `@opentelemetry/instrumentation-graphql` library enables subgraphs and monoliths to emit the following spans as part of [OpenTelemetry traces](https://opentelemetry.io/docs/concepts/data-sources/#traces):
+
+| Name | Description |
+|------|-------------|
+| `graphql.parse` | The amount of time the server spent parsing an operation string. |
+| `graphql.validate` | The amount of time the server spent validating an operation string. |
+| `graphql.execute` | The total amount of time the server spent executing an operation. |
+| `graphql.resolve` | The amount of time the server spent resolving a particular field. |
+
+Note that not every GraphQL span appears in every operation trace. This is because Apollo Server can skip parsing or validating an operation string if that string is available in the operation cache.
+
+
+
+Federated gateways must not install the `@opentelemetry/instrumentation-graphql` library, so these spans don't appear in gateway traces.
+
+
+
+## Gateway-specific spans
+
+The `@apollo/gateway` library emits the following spans as part of [OpenTelemetry traces](https://opentelemetry.io/docs/concepts/data-sources/#traces):
+
+| Name | Description |
+|------|-------------|
+| `gateway.request` | The total amount of time the gateway spent serving a request. |
+| `gateway.validate` | The amount of time the gateway spent validating a GraphQL operation string. |
+| `gateway.plan` | The amount of time the gateway spent generating a query plan for a validated operation. |
+| `gateway.execute` | The amount of time the gateway spent executing operations on subgraphs. |
+| `gateway.fetch` | The amount of time the gateway spent fetching data from a particular subgraph. |
+| `gateway.postprocessing` | The amount of time the gateway spent composing a complete response from individual subgraph responses. |
diff --git a/docs/source/routing/uplink.mdx b/docs/source/routing/uplink.mdx
new file mode 100644
index 0000000000..025c834533
--- /dev/null
+++ b/docs/source/routing/uplink.mdx
@@ -0,0 +1,172 @@
+---
+title: Apollo Uplink
+subtitle: Fetch your managed router's configuration
+description: Learn how to configure Apollo Uplink for managed GraphQL federation, including polling behavior and Uplink URLs.
+---
+
+When using [managed federation](/federation/managed-federation/overview/), your supergraph's router by default regularly polls an endpoint called _Apollo Uplink_ for its latest supergraph schema and other configuration:
+
+```mermaid
+graph LR;
+ subgraph "Your infrastructure"
+ serviceA[Products subgraph];
+ serviceB[Reviews subgraph];
+ router([Router]);
+ end
+ subgraph "Apollo GraphOS"
+ registry{{Schema Registry}};
+ uplink{{Uplink}}
+ end
+ serviceA & serviceB -->|"Publishes schema"| registry;
+ registry -->|"Updates config"| uplink;
+ router -->|Polls for config changes| uplink;
+ class registry secondary;
+ class uplink secondary;
+```
+
+If you're using [Enterprise features](https://www.apollographql.com/pricing), Uplink also serves your router's license.
+
+To maximize uptime, Uplink is hosted simultaneously at two endpoints, one in GCP and one in AWS:
+
+- GCP: `https://uplink.api.apollographql.com/`
+- AWS: `https://aws.uplink.api.apollographql.com/`
+
+## Default polling behavior
+
+### GraphOS Router
+
+If you use the GraphOS Router with managed federation, it polls Uplink every ten seconds by default. Each time, it cycles through Uplink endpoints until it receives a response.
+
+Whenever a poll request times out or otherwise fails (the default timeout is thirty seconds), the router continues polling as usual at the next interval. In the meantime, it continues using its most recent successfully obtained configuration.
+
+### `@apollo/gateway`
+
+If you use the `@apollo/gateway` library with managed federation, your gateway polls Uplink every ten seconds by default. Each time, it cycles through Uplink endpoints until it receives a response.
+
+
+
+Versions of `@apollo/gateway` prior to v0.45.0 don't support multiple Uplink endpoints and only use the GCP endpoint by default.
+
+
+
+Whenever a poll request fails, the gateway retries that request, again cycling through the Uplink endpoints. It continues retrying until a request succeeds or until it reaches the configured maximum number of retries.
+
+Even if a particular poll request fails all of its retries, the gateway continues polling as usual at the next interval (with its own set of retries if needed). In the meantime, the gateway continues using its most recent successfully obtained configuration.
+
+## Configuring polling behavior
+
+You can configure the following aspects of your router's Uplink polling behavior:
+
+- The interval at which your router polls (minimum ten seconds)
+- The list of Uplink URLs your router uses
+- The request timeout for each poll request (GraphOS Router only)
+ - For `@apollo/gateway`, this value is always thirty seconds.
+- The number of retries performed for a failed poll request (`@apollo/gateway` only)
+ - The GraphOS Router does not perform retries for a failed poll request. It continues polling at the next interval.
+
+### GraphOS Router
+
+You configure Uplink polling for the GraphOS Router by providing certain command-line options when running the router executable. These options all start with `--apollo-uplink`.
+
+[See the GraphOS Router docs](/graphos/reference/router/configuration#--apollo-uplink-endpoints).
+
+### `@apollo/gateway`
+
+#### Retry limit
+
+You can configure how many times your gateway retries a single failed poll request like so:
+
+```js {6}
+const { ApolloGateway } = require('@apollo/gateway');
+
+// ...
+
+const gateway = new ApolloGateway({
+ uplinkMaxRetries: 2
+});
+```
+
+By default, the gateway retries a single poll request a number of times equal to three times the number of [Uplink URLs](#uplink-urls-advanced) (this is almost always `6` times).
+
+Even if a particular poll request fails all of its retries, the gateway continues polling as usual at the next interval (with its own set of retries if needed). In the meantime, the gateway continues using its most recently obtained configuration.
+
+#### Poll interval
+
+You can configure the interval at which your gateway polls Apollo Uplink like so:
+
+```js {6}
+const { ApolloGateway } = require('@apollo/gateway');
+
+// ...
+
+const gateway = new ApolloGateway({
+ pollIntervalInMs: 15000 // 15 seconds
+});
+```
+
+The `pollIntervalInMs` option specifies the polling interval in milliseconds. This value must be at least `10000` (which is also the default value).
+
+#### Uplink URLs (advanced)
+
+
+
+Most gateways never need to configure their list of Apollo Uplink URLs. Consult this section only if advised to do so.
+
+
+
+You can provide a custom list of URLs for the gateway to use when polling Uplink. You can provide this list either in the `ApolloGateway` constructor or as an environment variable.
+
+##### `ApolloGateway` constructor
+
+Provide a custom list of Uplink URLs to the `ApolloGateway` constructor like so:
+
+```js {6-9}
+const { ApolloGateway } = require('@apollo/gateway');
+
+// ...
+
+const gateway = new ApolloGateway({
+ uplinkEndpoints: [
+ // Omits AWS endpoint
+ 'https://uplink.api.apollographql.com/'
+ ]
+});
+```
+
+This example omits the AWS endpoint, which means it's never polled.
+
+
+
+If you also provide a list of endpoints via [environment variable](#environment-variable), the environment variable takes precedence.
+
+
+
+##### Environment variable
+
+You can provide a comma-separated list of Uplink URLs as the value of the `APOLLO_SCHEMA_CONFIG_DELIVERY_ENDPOINT` environment variable in your gateway's environment:
+
+```bash
+APOLLO_SCHEMA_CONFIG_DELIVERY_ENDPOINT=https://aws.uplink.api.apollographql.com/,https://uplink.api.apollographql.com/
+```
+
+## Schema size limit
+
+Supergraph schemas provided by Uplink cannot exceed 6MB in size. The vast majority of supergraph schemas are well below this limit.
+
+If your supergraph schema does exceed 6MB, you can set up a [build status webhook](/graphos/platform/insights/notifications/build-status) for your graph. Whenever you're notified of a successful supergraph schema composition, your webhook can fetch the latest supergraph schema [via the Rover CLI](/rover/commands/supergraphs#supergraph-fetch).
+
+## Bypassing Uplink
+
+
+
+In advanced use cases, you might want your router to use a supergraph schema different from the latest validated schema provided by Uplink. For example, you might have multiple deployment environments for the same [graph variant](/graphos/get-started/concepts/graphs-and-variants#variants), and you want everything that managed federation provides, except that each environment's routers should use a supergraph schema specific to that environment.
+
+For this scenario, you can follow a workflow that, instead of retrieving supergraph schemas from Uplink, uses the [GraphOS Platform API](/graphos/reference/platform-api) to retrieve a supergraph schema for a specific [GraphOS launch](/graphos/platform/schema-management/delivery/launch). The workflow, in summary:
+
+1. When deploying your graphs, publish your subgraphs in a batch using the GraphOS Platform API.
+ * The Platform API triggers a launch (and possibly downstream launches for contracts) and returns the launch ID (and any downstream launch IDs).
+1. Poll the launch status until the launch (and all downstream launches) completes successfully.
+1. Retrieve the supergraph schema of the successful launch by calling the Platform API with the launch ID.
+1. Set or "pin" the supergraph schema to your routers by deploying them with the [`--supergraph` or `-s` option](/graphos/reference/router/configuration#-s----supergraph).
+
+For an example with operations calling the Platform API, see a [blue-green deployment example](/graphos/schema-design/guides/production-readiness/best-practices#example-blue-green-deployment).