diff --git a/arbitrum-docs/node-running/gentle-introduction-run-node.md b/arbitrum-docs/node-running/gentle-introduction-run-node.md index 244cc011c..8434756c9 100644 --- a/arbitrum-docs/node-running/gentle-introduction-run-node.md +++ b/arbitrum-docs/node-running/gentle-introduction-run-node.md @@ -13,6 +13,7 @@ In order to be able to _interact with_ or _build applications on_ any of the Arb Here, you can find resources that help you run different types of Arbitrum nodes: - Step-by-step instructions for running different Arbitrum nodes, including [full Nitro node](./how-tos/running-a-full-node.mdx), [full Classic node](./how-tos/running-a-classic-node.mdx), [local dev node](./how-tos/local-dev-node.mdx), [feed relay](./how-tos/running-a-feed-relay.mdx), and [validator](./how-tos/running-a-validator.mdx) -- Step-by-step instructions for how to [read the sequencer feed](./how-tos/read-sequencer-feed.md), [build the Nitro locally](./how-tos/build-nitro-locally.md), [run a DAS](./how-tos/running-a-daserver.mdx) and [run the sequencer coordinator manager UI tool](./how-tos/running-a-sequencer-coordinator-manager.mdx) +- Step-by-step instructions for how to [read the sequencer feed](./how-tos/read-sequencer-feed.md), [build the Nitro locally](./how-tos/build-nitro-locally.md) and [run the sequencer coordinator manager UI tool](./how-tos/running-a-sequencer-coordinator-manager.mdx) +- Step-by-step instructions for [how to configure a Data Availability Committee](/node-running/how-tos/data-availability-committee/introduction.mdx) - [Troubleshooting page](./troubleshooting-running-nodes.md) - [Frequently asked questions](./faq.md) diff --git a/arbitrum-docs/node-running/how-tos/data-availability-committee/_category_.yml b/arbitrum-docs/node-running/how-tos/data-availability-committee/_category_.yml new file mode 100644 index 000000000..035c79606 --- /dev/null +++ b/arbitrum-docs/node-running/how-tos/data-availability-committee/_category_.yml @@ -0,0 +1,4 @@ +label: "Configure a Data Availability Committee (DAC)" +collapsible: true +collapsed: true +position: 13 diff --git a/arbitrum-docs/node-running/how-tos/data-availability-committee/configure-the-dac-in-your-chain.mdx b/arbitrum-docs/node-running/how-tos/data-availability-committee/configure-the-dac-in-your-chain.mdx new file mode 100644 index 000000000..e6a34f0ba --- /dev/null +++ b/arbitrum-docs/node-running/how-tos/data-availability-committee/configure-the-dac-in-your-chain.mdx @@ -0,0 +1,12 @@ +--- +title: 'How to configure the Data Availability Committee (DAC) in your chain' +description: This how-to will help you configure the DAC in your chain. 
+author: jose-franco
+sidebar_label: Configure the Data Availability Committee (DAC) in your chain
+sidebar_position: 4
+content-type: how-to
+---
+
+import UnderConstructionPartial from '../../../partials/_under-construction-banner-partial.md';
+
+<UnderConstructionPartial />
diff --git a/arbitrum-docs/node-running/how-tos/data-availability-committee/deploy-a-das.mdx b/arbitrum-docs/node-running/how-tos/data-availability-committee/deploy-a-das.mdx
new file mode 100644
index 000000000..2d62c65ea
--- /dev/null
+++ b/arbitrum-docs/node-running/how-tos/data-availability-committee/deploy-a-das.mdx
@@ -0,0 +1,452 @@
+---
+title: 'How to deploy a Data Availability Server (DAS)'
+description: This how-to will help you deploy a Data Availability Server (DAS)
+author: jose-franco
+sidebar_label: Deploy a Data Availability Server (DAS)
+sidebar_position: 2
+content-type: how-to
+---
+
+import PublicPreviewBannerPartial from '../../../partials/_public-preview-banner-partial.md';
+
+<PublicPreviewBannerPartial />
+

+:::info
+
+AnyTrust chains rely on an external Data Availability Committee (DAC) to store data and provide it on-demand, instead of using their parent chain as the Data Availability (DA) layer. The members of the DAC run a Data Availability Server (DAS) to handle these operations.
+
+:::

+
+In this how-to, you'll learn how to deploy a DAS that exposes:
+
+1. **An RPC interface** that the sequencer uses to store batches of data on the DAS.
+2. **An HTTP REST interface** that lets the DAS respond to requests for those batches of data.
+
+For more information related to configuring a DAC, refer to the _[Introduction](./introduction.mdx)_.
+
+This how-to assumes that you're familiar with:
+
+- The DAC's role in the AnyTrust protocol. Refer to _[Inside AnyTrust](/inside-anytrust.mdx)_ for a refresher.
+- [Kubernetes](https://kubernetes.io/). The examples in this guide use Kubernetes to containerize your DAS.
+
+## How does a DAS work?
+
+A Data Availability Server (DAS) allows storage and retrieval of transaction data batches for an AnyTrust chain. It's the software that the members of the DAC run in order to provide the Data Availability service.
+
+DA servers accept time-limited requests from the sequencer of an AnyTrust chain to store data batches, and return a signed certificate promising to store that data for the established period. They also respond to requests to retrieve the data batches.
+
+## Configuration options
+
+When setting up a DAS, there are certain options you can configure to suit your infrastructure needs:
+
+### Interfaces available in a DAS
+
+There are two main interfaces that can be enabled in a DAS: an **RPC interface** to store data in the DAS, intended to be used only by the AnyTrust sequencer; and a **REST interface** that supports only GET operations and is intended for public use. They work as follows:
+
+1. The **RPC interface** listens for `das_store` RPC messages coming from the sequencer. Messages are signed by the sequencer, and the DAS checks this signature upon receipt.
+2. The **REST interface** responds to HTTP GET requests pointed at `/get-by-hash/`. It uses the hash of the data batch as a unique identifier, and will always return the same data for a given hash.
+
+**IPFS** is an alternative interface that serves requests for batch retrieval. A DAS can be configured to sync and pin batches to its local IPFS repository, then act as a node in the IPFS peer-to-peer network. The advantage of using IPFS is that the Nitro node will use the batch hashes to find the batch data on the IPFS peer-to-peer network. Depending on the network configuration, that Nitro node may then also act as an IPFS node serving the batch data.
+
+### Storage options
+
+A DAS can be configured to use one or more of four storage backends:
+
+- [AWS S3](https://aws.amazon.com/s3/) bucket
+- Files on local disk
+- [Badger](https://dgraph.io/docs/badger/) database on local disk
+- [IPFS](https://ipfs.tech/)
+
+If more than one backend is configured, store requests must succeed on all of them to be considered successful, while retrieve requests only require one of them to succeed.
+
+If there are other storage backends you'd like us to support, send us a message on [Discord](https://discord.gg/arbitrum), or contribute directly to the [Nitro repository](https://github.com/OffchainLabs/nitro/).
+
+### Caching
+
+An in-memory cache can be enabled to avoid needing to access the underlying storage for retrieve requests.
+
+Requests sent to the REST interface (to retrieve data from the DAS) always return the same data for a given hash, so the result is cacheable. The response also contains a `cache-control` header specifying that the object is immutable and can be cached for up to 28 days.
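+As a quick illustration, you can inspect this header with `curl`. The endpoint and hash below are hypothetical placeholders, and the exact header value may differ between versions:
+
+```bash
+# Request only the headers for a batch served by a (hypothetical) DAS REST endpoint.
+# For a known batch hash, the response should include a cache-control header
+# along the lines of: cache-control: public, max-age=2419200, immutable
+curl -sI "https://your-das.example.com/get-by-hash/<data hash>" | grep -i 'cache-control'
+```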
+
+### State synchronization
+
+DA servers also have an optional REST aggregator which, when a data batch is not found in the cache or storage, requests that batch from other REST servers defined in a list and stores it upon receipt. This is how a DAS that misses storing a batch (the AnyTrust protocol doesn't require all of them to report success in order to post the batch's certificate to the parent chain) can automatically repair gaps in the data it stores, and also how a [mirror DAS](#running-a-mirror-das) can sync its data. A public list of REST endpoints is published online, which the DAS can be configured to download and use, and additional endpoints can be specified in the configuration.
+
+## How to deploy the DAS
+
+### Step 0: Prerequisites
+
+Gather the following information:
+
+- The latest Nitro docker image: `@latestNitroNodeImage@`
+- An RPC endpoint for the parent chain. It is recommended to use a [third-party provider RPC](/node-running/node-providers.mdx#third-party-rpc-providers) or [run your own node](/node-running/how-tos/running-an-orbit-node.mdx) to prevent being rate limited.
+- The SequencerInbox contract address in the parent chain.
+- If you wish to configure a [REST aggregator for your DAS](#state-synchronization), you'll need the URL where the list of REST endpoints is kept.
+
+### Step 1: Set up a persistent volume
+
+First, we'll set up a volume to store the DAS database and the BLS keypair that we'll generate in the next step.
+
+In k8s, we can use a configuration like this:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: das-server
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 200Gi
+  storageClassName: gp2
+```
+
+### Step 2: Generate the BLS keypair
+
+Next, we'll generate a BLS keypair. The private key will be used to sign the Data Availability Certificates (DACert) when receiving requests to store data, and the public key will be used to prove that the DACert was signed by the DAS.
+
+The BLS keypair must be generated using the `datool keygen` utility. Later, it will be passed to the DAS by file or command line.
+
+When running the key generator, we'll specify the `--dir` parameter with the absolute path to the directory inside the volume where the keys will be stored. That directory needs to exist before running the tool.
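+For reference, these are the bare commands that the deployment below runs (using the example path from this guide):
+
+```bash
+# Create the key directory inside the mounted volume, then generate the BLS keypair
+mkdir -p /home/user/data/keys
+datool keygen --dir /home/user/data/keys
+```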
+Here's an example of how to use a k8s deployment to run the `datool keygen` utility and store the key on the volume that we created in the previous step (and that will be used by the DAS in the next step). Note that after this deployment has run once, it can be torn down and deleted:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: das-server
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: das-server
+  template:
+    metadata:
+      labels:
+        app: das-server
+    spec:
+      containers:
+        - command:
+            - bash
+            - -c
+            - |
+              mkdir -p /home/user/data/keys
+              /usr/local/bin/datool keygen --dir /home/user/data/keys
+              sleep infinity
+          image: @latestNitroNodeImage@
+          imagePullPolicy: Always
+          resources:
+            limits:
+              cpu: "4"
+              memory: 10Gi
+            requests:
+              cpu: "4"
+              memory: 10Gi
+          ports:
+            - containerPort: 9876
+              protocol: TCP
+          volumeMounts:
+            - mountPath: /home/user/data/
+              name: data
+      volumes:
+        - name: data
+          persistentVolumeClaim:
+            claimName: das-server
+```
+
+### Step 3: Deploy the DAS
+
+To run the DAS, we'll use the `daserver` tool and we'll configure the following parameters:
+
+| Parameter                                    | Description                                                                                                     |
+| -------------------------------------------- | ---------------------------------------------------------------------------------------------------------------- |
+| --data-availability.parent-chain-node-url    | RPC endpoint of a parent chain node                                                                               |
+| --data-availability.sequencer-inbox-address  | Address of the SequencerInbox in the parent chain                                                                 |
+| --data-availability.key.key-dir              | The absolute path to the directory inside the volume to read the BLS keypair ('das_bls.pub' and 'das_bls') from   |
+| --enable-rpc                                 | Enables the HTTP-RPC server listening on --rpc-addr and --rpc-port                                                |
+| --rpc-addr                                   | HTTP-RPC server listening interface (default "localhost")                                                         |
+| --rpc-port                                   | (Optional) HTTP-RPC server listening port (default 9876)                                                          |
+| --enable-rest                                | Enables the REST server listening on --rest-addr and --rest-port                                                  |
+| --rest-addr                                  | REST server listening interface (default "localhost")                                                             |
+| --rest-port                                  | (Optional) REST server listening port (default 9877)                                                              |
+| --log-level                                  | Log level: 1 - ERROR, 2 - WARN, 3 - INFO, 4 - DEBUG, 5 - TRACE (default 3)                                        |
+
+To enable caching, you can use the following parameters:
+
+| Parameter                                   | Description                                                              |
+| ------------------------------------------- | ------------------------------------------------------------------------ |
+| --data-availability.local-cache.enable      | Enables local in-memory caching of sequencer batch data                  |
+| --data-availability.local-cache.expiration  | Expiration time for in-memory cached sequencer batches (default 1h0m0s)  |
+
+To enable the REST aggregator, use the following parameters:
+
+| Parameter                                                                    | Description                                                                                                                                                                                  |
+| ----------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| --data-availability.rest-aggregator.enable                                   | Enables retrieval of sequencer batch data from a list of remote REST endpoints                                                                                                                 |
+| --data-availability.rest-aggregator.online-url-list                          | A URL to a list of URLs of REST DAS endpoints that is checked at startup. This option is additive with the urls option                                                                         |
+| --data-availability.rest-aggregator.urls                                     | List of URLs including 'http://' or 'https://' prefixes and port numbers to REST DAS endpoints. This option is additive with the online-url-list option                                        |
+| --data-availability.rest-aggregator.sync-to-storage.check-already-exists     | When using a REST aggregator, checks if the data already exists in this DAS's storage. Must be disabled for fast sync with an IPFS backend (default true)                                      |
+| --data-availability.rest-aggregator.sync-to-storage.eager                    | When using a REST aggregator, eagerly syncs batch data to this DAS's storage from the REST endpoints, using the parent chain as the index of batch data hashes; otherwise only syncs lazily    |
+| --data-availability.rest-aggregator.sync-to-storage.eager-lower-bound-block  | When using a REST aggregator that's eagerly syncing, starts indexing forward from this block from the parent chain. Only used if there is no sync state.                                       |
+| --data-availability.rest-aggregator.sync-to-storage.retention-period         | When using a REST aggregator, period to retain the synced data (defaults to forever)                                                                                                           |
+| --data-availability.rest-aggregator.sync-to-storage.state-dir                | When using a REST aggregator, directory to store the sync state in, i.e. the block number currently synced up to, so that it doesn't sync from scratch each time                               |
+
+Finally, for the storage backends you wish to configure, use the following parameters. Toggle between the different options to see all available parameters.
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import S3Parameters from './partials/parameters/_s3-parameters.mdx';
+import LocalBadgerDBParameters from './partials/parameters/_local-badger-db-parameters.mdx';
+import LocalFilesParameters from './partials/parameters/_local-files-parameters.mdx';
+import IPFSParameters from './partials/parameters/_ipfs-parameters.mdx';
+
+<Tabs>
+  <TabItem value="aws-s3" label="AWS S3 bucket">
+    <S3Parameters />
+  </TabItem>
+  <TabItem value="local-badger-db" label="Local Badger database">
+    <LocalBadgerDBParameters />
+  </TabItem>
+  <TabItem value="local-files" label="Local files">
+    <LocalFilesParameters />
+  </TabItem>
+  <TabItem value="ipfs" label="IPFS">
+    <IPFSParameters />
+  </TabItem>
+</Tabs>
+
+Here's an example `daserver` command for a DAS that:
+
+- Enables both interfaces: RPC and REST
+- Enables local cache
+- Enables a [REST aggregator](#state-synchronization)
+- Enables AWS S3 bucket storage
+- Enables local Badger database storage
+
+```bash
+daserver \
+  --data-availability.parent-chain-node-url "<YOUR PARENT CHAIN RPC ENDPOINT>" \
+  --data-availability.sequencer-inbox-address "<SEQUENCER INBOX ADDRESS>" \
+  --data-availability.key.key-dir /home/user/data/keys \
+  --enable-rpc \
+  --rpc-addr '0.0.0.0' \
+  --log-level 3 \
+  --enable-rest \
+  --rest-addr '0.0.0.0' \
+  --data-availability.local-cache.enable \
+  --data-availability.rest-aggregator.enable \
+  --data-availability.rest-aggregator.online-url-list "<URL OF THE LIST OF REST ENDPOINTS>" \
+  --data-availability.s3-storage.enable \
+  --data-availability.s3-storage.access-key "<YOUR ACCESS KEY>" \
+  --data-availability.s3-storage.bucket "<YOUR BUCKET>" \
+  --data-availability.s3-storage.region "<YOUR REGION>" \
+  --data-availability.s3-storage.secret-key "<YOUR SECRET KEY>" \
+  --data-availability.s3-storage.object-prefix "<YOUR OBJECT KEY PREFIX>/" \
+  --data-availability.local-db-storage.enable \
+  --data-availability.local-db-storage.data-dir /home/user/data/badgerdb
+```
+
+And here's an example of how to use a k8s deployment to run that command:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: das-server
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: das-server
+  strategy:
+    rollingUpdate:
+      maxSurge: 0
+      maxUnavailable: 50%
+    type: RollingUpdate
+  template:
+    metadata:
+      labels:
+        app: das-server
+    spec:
+      containers:
+        - command:
+            - bash
+            - -c
+            - |
+              mkdir -p /home/user/data/badgerdb
+              /usr/local/bin/daserver --data-availability.parent-chain-node-url "<YOUR PARENT CHAIN RPC ENDPOINT>" --data-availability.sequencer-inbox-address "<SEQUENCER INBOX ADDRESS>" --data-availability.key.key-dir /home/user/data/keys --enable-rpc --rpc-addr '0.0.0.0' --log-level 3 --enable-rest --rest-addr '0.0.0.0' --data-availability.local-cache.enable --data-availability.rest-aggregator.enable --data-availability.rest-aggregator.online-url-list "<URL OF THE LIST OF REST ENDPOINTS>" --data-availability.s3-storage.enable --data-availability.s3-storage.access-key "<YOUR ACCESS KEY>" --data-availability.s3-storage.bucket "<YOUR BUCKET>" --data-availability.s3-storage.region "<YOUR REGION>" --data-availability.s3-storage.secret-key "<YOUR SECRET KEY>" --data-availability.s3-storage.object-prefix "<YOUR OBJECT KEY PREFIX>/" --data-availability.local-db-storage.enable --data-availability.local-db-storage.data-dir /home/user/data/badgerdb
+          image: @latestNitroNodeImage@
+          imagePullPolicy: Always
+          resources:
+            limits:
+              cpu: "4"
+              memory: 10Gi
+            requests:
+              cpu: "4"
+              memory: 10Gi
+          ports:
+            - containerPort: 9876
+              hostPort: 9876
+              protocol: TCP
+            - containerPort: 9877
+              hostPort: 9877
+              protocol: TCP
+          volumeMounts:
+            - mountPath: /home/user/data/
+              name: data
+          readinessProbe:
+            failureThreshold: 3
+            httpGet:
+              path: /health/
+              port: 9877
+              scheme: HTTP
+            initialDelaySeconds: 5
+            periodSeconds: 5
+            successThreshold: 1
+            timeoutSeconds: 1
+      volumes:
+        - name: data
+          persistentVolumeClaim:
+            claimName: das-server
+```
+
+## Archive DA servers
+
+Archive DA servers don't discard any data, even after its expiry timeout. Each DAC should have at least one archive DAS to ensure that all historical data remains available.
+
+To activate the "archive mode" in your DAS, set the parameter `discard-after-timeout` to `false` for your storage backend. For example:
+
+```bash
+--data-availability.s3-storage.discard-after-timeout=false
+--data-availability.local-db-storage.discard-after-timeout=false
+```
+
+Note that `local-file-storage` and `ipfs-storage` never discard data, so the option `discard-after-timeout` is not available for them.
+
+Archive servers should make use of the `--data-availability.rest-aggregator.sync-to-storage` options described above to pull in any data that they don't have.
+
+## Testing the DAS
+
+Once the DAS is running, we can test if everything is working correctly using the following methods.
+
+### Test 1: RPC health check
+
+The RPC interface enabled in the DAS has a health check for the underlying storage that can be invoked by using the RPC method `das_healthCheck`, which returns `200` if the DAS is active.
+
+Example:
+
+```bash
+curl -X POST \
+  -H 'Content-Type: application/json' \
+  -d '{"jsonrpc":"2.0","id":0,"method":"das_healthCheck","params":[]}' \
+  <YOUR DAS RPC ENDPOINT>
+```
+
+### Test 2: Store and retrieve data
+
+The RPC interface of the DAS validates that requests to store data are signed by the sequencer's ECDSA key, identified via a call to the `SequencerInbox` contract on the parent chain. It can also be configured to accept store requests signed with another ECDSA key of your choosing. This could be useful for running load tests, canaries, or troubleshooting your own infrastructure.
+
+Using this facility, a load test could be constructed by writing a script to store arbitrary amounts of data at an arbitrary rate; a canary could be constructed to store and retrieve data on some interval.
+
+First, generate an ECDSA keypair with `datool keygen`:
+
+```bash
+datool keygen --dir /dir-to-store-the-key-pair/ --ecdsa
+```
+
+Then add the following configuration parameter to `daserver`:
+
+```bash
+--data-availability.extra-signature-checking-public-key "/dir-to-store-the-key-pair/ecdsa.pub"
+
+OR
+
+--data-availability.extra-signature-checking-public-key "0x<HEX ENCODED PUBLIC KEY>"
+```
+
+Now you can use the `datool` utility to send store requests signed with the ECDSA private key:
+
+```bash
+datool client rpc store --url http://localhost:9876 --message "Hello world" --signing-key "/dir-to-store-the-key-pair/ecdsa"
+
+OR
+
+datool client rpc store --url http://localhost:9876 --message "Hello world" --signing-key "0x<HEX ENCODED PRIVATE KEY>"
+```
+
+The above command will output the `Hex Encoded Data Hash`, which can then be used to retrieve the data (note that you must have the REST interface enabled in the DAS):
+
+```bash
+datool client rest getbyhash --url http://localhost:9877 --data-hash <0xDataHash>
+```
+
+After running that command, the result should be: `Message: Hello world`
+
+The retention period defaults to 24 hours, but can be configured when calling `datool client rpc store` with the parameter `--das-retention-period`, passing the desired retention period in milliseconds.
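+As a sketch of the canary idea (hypothetical, assuming the extra signing key configured above; the exact output format of `datool` may vary between versions, so adjust the parsing as needed):
+
+```bash
+#!/bin/bash
+# Hypothetical canary: periodically store a message through the RPC interface
+# and read it back through the REST interface.
+while true; do
+  msg="canary-$(date +%s)"
+  # Store the message and extract the hex-encoded data hash from datool's output
+  hash=$(datool client rpc store --url http://localhost:9876 \
+    --message "$msg" --signing-key "/dir-to-store-the-key-pair/ecdsa" \
+    | grep -oE '0x[0-9a-fA-F]{64}' | head -n 1)
+  # Retrieve the batch and verify that it contains the original message
+  if datool client rest getbyhash --url http://localhost:9877 --data-hash "$hash" | grep -q "$msg"; then
+    echo "OK: $msg"
+  else
+    echo "FAILED: $msg"
+  fi
+  sleep 60
+done
+```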
+
+### Test 3: REST health check
+
+The REST interface has a health check on the path `/health`, which will return `200` if the underlying storage is working, and `503` otherwise.
+
+Example:
+
+```bash
+curl -I <YOUR DAS REST ENDPOINT>/health
+```
+
+## Running a mirror DAS
+
+To prevent spamming attacks, it's recommended not to expose the REST interface of your main DAS to the public (as explained in [Security considerations](#security-considerations)). Instead, you can run a mirror DAS to complement your setup. The mirror DAS will handle all public REST requests, while reading information from the main DAS via its (now private) REST interface.
+
+In general, mirror DA servers serve two main purposes:
+
+1. Prevent the main DAS from having to serve requests for data, allowing it to focus only on storing the data received.
+2. Provide resiliency to the network in the case of a DAS going down.
+
+Find information about how to set up a mirror DAS in [How to deploy a mirror DAS](./deploy-a-mirror-das.mdx).
+
+## Security considerations
+
+Keep in mind the following information when running the DAS.
+
+A DAS should strive not to miss any batch of information sent by the sequencer. Although it can use a REST aggregator to fetch missing information from other DA servers, it should aim to synchronize all received information directly. To facilitate this, avoid placing any load balancing layer before the DAS, so that it handles all incoming traffic.
+
+Taking that into account, there's a risk of Denial of Service attacks on those servers if the endpoint for the RPC interface is publicly known. To mitigate this risk, ensure the RPC endpoint's URL is not easily discoverable. It should be known only to the sequencer. Share this information with the chain owner through a private channel to maintain security.
+
+Finally, as explained in the previous section, if you're also running a mirror DAS, there's no need to publicly expose the REST interface of your main DAS. Your mirrors can synchronize over your private network using the REST interface of your main DAS and other public mirrors.
+
+## What to do next?
+
+Once the DAS is deployed and tested, you'll have to communicate the following information to the chain owner, so they can update the chain parameters and configure the sequencer:
+
+- The BLS public key
+- The https URL for the RPC endpoint, which includes some random string (e.g. das.your-chain.io/rpc/randomstring123), communicated through a secure channel
+- The https URL for the REST endpoint (e.g. das.your-chain.io/rest)
+
+import DASOptionalParameters from './partials/_das-optional-parameters.mdx';
+import DASMetrics from './partials/_das-metrics.mdx';
+
+## Optional parameters
+
+<DASOptionalParameters />
+
+## Metrics
+
+<DASMetrics />
diff --git a/arbitrum-docs/node-running/how-tos/data-availability-committee/deploy-a-mirror-das.mdx b/arbitrum-docs/node-running/how-tos/data-availability-committee/deploy-a-mirror-das.mdx
new file mode 100644
index 000000000..9655bd0d2
--- /dev/null
+++ b/arbitrum-docs/node-running/how-tos/data-availability-committee/deploy-a-mirror-das.mdx
@@ -0,0 +1,284 @@
+---
+title: 'How to deploy a mirror Data Availability Server (DAS)'
+description: This how-to will help you deploy a mirror Data Availability Server (DAS)
+author: jose-franco
+sidebar_label: Deploy a mirror Data Availability Server (DAS)
+sidebar_position: 3
+content-type: how-to
+---
+
+import PublicPreviewBannerPartial from '../../../partials/_public-preview-banner-partial.md';
+
+<PublicPreviewBannerPartial />
+
+:::caution Running a regular DAS vs running a mirror DAS
+
+The main use-case for running a mirror DAS is to complement your setup as a Data Availability Committee (DAC) member. That means that you should run your main DAS first, and then configure the mirror DAS. Refer to _[How to deploy a DAS](./deploy-a-das.mdx)_ if needed.
+
+:::
+

+:::info
+
+AnyTrust chains rely on an external Data Availability Committee (DAC) to store data and provide it on-demand, instead of using their parent chain as the Data Availability (DA) layer. The members of the DAC run a Data Availability Server (DAS) to handle these operations.
+
+:::

+
+In this how-to, you'll learn how to configure a mirror DAS that serves `GET` requests for stored batches of information through a REST HTTP interface. For a refresher on DACs, refer to the _[Introduction](./introduction.mdx)_.
+
+This how-to assumes that you're familiar with:
+
+- How a regular DAS works and what configuration options are available. Refer to _[How to deploy a DAS](./deploy-a-das.mdx)_ for a refresher.
+- [Kubernetes](https://kubernetes.io/). The examples in this guide use Kubernetes to containerize your DAS.
+
+## What is a mirror DAS?
+
+To prevent spamming attacks, it's recommended not to expose the REST interface of your main DAS to the public (as explained in [How to deploy a DAS](./deploy-a-das.mdx#security-considerations)). Instead, you can run a mirror DAS to complement your setup. The mirror DAS will handle all public REST requests, while reading information from the main DAS via its (now private) REST interface.
+
+In general, mirror DA servers serve two main purposes:
+
+1. Prevent the main DAS from having to serve requests for data, allowing it to focus only on storing the data received.
+2. Provide resiliency to the network in the case of a DAS going down.
+
+## Configuration options
+
+A mirror DAS will use the same tool and, thus, the same configuration options as your main DAS. You can find an explanation of those options in [How to deploy a DAS](./deploy-a-das.mdx#configuration-options).
+
+## How to deploy a mirror DAS
+
+### Step 0: Prerequisites
+
+Gather the following information:
+
+- The latest Nitro docker image: `@latestNitroNodeImage@`
+- An RPC endpoint for the parent chain. It is recommended to use a [third-party provider RPC](/node-running/node-providers.mdx#third-party-rpc-providers) or [run your own node](/node-running/how-tos/running-an-orbit-node.mdx) to prevent being rate limited.
+- The SequencerInbox contract address in the parent chain.
+- URL of the list of REST endpoints of other DA servers to configure the REST aggregator.
+
+### Step 1: Set up a persistent volume
+
+First, we'll set up a volume to store the DAS database.
In k8s, we can use a configuration like this: + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: das-mirror +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 200Gi + storageClassName: gp2 +``` + +### Step 2: Deploy the mirror DAS + +To run the mirror DAS, we'll use the `daserver` tool and we'll configure the following parameters: + +| Parameter | Description | +| --------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| --data-availability.parent-chain-node-url | RPC endpoint of a parent chain node | +| --data-availability.sequencer-inbox-address | Address of the SequencerInbox in the parent chain | +| --enable-rest | Enables the REST server listening on --rest-addr and --rest-port | +| --rest-addr | REST server listening interface (default "localhost") | +| --rest-port | (Optional) REST server listening port (default 9877) | +| --log-level | Log level: 1 - ERROR, 2 - WARN, 3 - INFO, 4 - DEBUG, 5 - TRACE (default 3) | +| --data-availability.rest-aggregator.enable | Enables retrieval of sequencer batch data from a list of remote REST endpoints | +| --data-availability.rest-aggregator.online-url-list | A URL to a list of URLs of REST DAS endpoints that is checked at startup. This option is additive with the urls option | +| --data-availability.rest-aggregator.urls | List of URLs including 'http://' or 'https://' prefixes and port numbers to REST DAS endpoints. This option is additive with the online-url-list option | +| --data-availability.rest-aggregator.sync-to-storage.check-already-exists | When using a REST aggregator, checks if the data already exists in this DAS's storage. Must be disabled for fast sync with an IPFS backend (default true) | +| --data-availability.rest-aggregator.sync-to-storage.eager | When using a REST aggregator, eagerly syncs batch data to this DAS's storage from the REST endpoints, using the parent chain as the index of batch data hashes; otherwise only syncs lazily | +| --data-availability.rest-aggregator.sync-to-storage.eager-lower-bound-block | When using a REST aggregator that's eagerly syncing, starts indexing forward from this block from the parent chain. Only used if there is no sync state. | +| --data-availability.rest-aggregator.sync-to-storage.retention-period | When using a REST aggregator, period to retain the synced data (defaults to forever) | +| --data-availability.rest-aggregator.sync-to-storage.state-dir | When using a REST aggregator, directory to store the sync state in, i.e. the block number currently synced up to, so that it doesn't sync from scratch each time | + +To enable caching, you can use the following parameters: + +| Parameter | Description | +| ------------------------------------------ | ----------------------------------------------------------------------- | +| --data-availability.local-cache.enable | Enables local in-memory caching of sequencer batch data | +| --data-availability.local-cache.expiration | Expiration time for in-memory cached sequencer batches (default 1h0m0s) | + +Finally, for the storage backends you wish to configure, use the following parameters. Toggle between the different options to see all available parameters. 
+ +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; +import S3Parameters from './partials/parameters/_s3-parameters.mdx'; +import LocalBadgerDBParameters from './partials/parameters/_local-badger-db-parameters.mdx'; +import LocalFilesParameters from './partials/parameters/_local-files-parameters.mdx'; +import IPFSParameters from './partials/parameters/_ipfs-parameters.mdx'; + +
+<Tabs>
+  <TabItem value="aws-s3" label="AWS S3 bucket">
+    <S3Parameters />
+  </TabItem>
+  <TabItem value="local-badger-db" label="Local Badger database">
+    <LocalBadgerDBParameters />
+  </TabItem>
+  <TabItem value="local-files" label="Local files">
+    <LocalFilesParameters />
+  </TabItem>
+  <TabItem value="ipfs" label="IPFS">
+    <IPFSParameters />
+  </TabItem>
+</Tabs>
+
+Here's an example `daserver` command for a mirror DAS that:
+
+- Enables local cache
+- Enables AWS S3 bucket storage that doesn't discard data after expiring ([archive](#archive-da-servers))
+- Enables local Badger database storage that doesn't discard data after expiring ([archive](#archive-da-servers))
+- Uses a local main DAS as part of the REST aggregator
+
+```bash
+daserver \
+  --data-availability.parent-chain-node-url "<YOUR PARENT CHAIN RPC ENDPOINT>" \
+  --data-availability.sequencer-inbox-address "<SEQUENCER INBOX ADDRESS>" \
+  --enable-rest \
+  --rest-addr '0.0.0.0' \
+  --log-level 3 \
+  --data-availability.local-cache.enable \
+  --data-availability.rest-aggregator.enable \
+  --data-availability.rest-aggregator.urls "http://your-main-das.svc.cluster.local:9877" \
+  --data-availability.rest-aggregator.online-url-list "<URL OF THE LIST OF REST ENDPOINTS>" \
+  --data-availability.rest-aggregator.sync-to-storage.eager \
+  --data-availability.rest-aggregator.sync-to-storage.eager-lower-bound-block "BLOCK NUMBER" \
+  --data-availability.rest-aggregator.sync-to-storage.state-dir /home/user/data/syncState \
+  --data-availability.s3-storage.enable \
+  --data-availability.s3-storage.access-key "<YOUR ACCESS KEY>" \
+  --data-availability.s3-storage.bucket "<YOUR BUCKET>" \
+  --data-availability.s3-storage.region "<YOUR REGION>" \
+  --data-availability.s3-storage.secret-key "<YOUR SECRET KEY>" \
+  --data-availability.s3-storage.object-prefix "<YOUR OBJECT KEY PREFIX>/" \
+  --data-availability.s3-storage.discard-after-timeout=false \
+  --data-availability.local-db-storage.enable \
+  --data-availability.local-db-storage.data-dir /home/user/data/badgerdb \
+  --data-availability.local-db-storage.discard-after-timeout=false
+```
+
+And here's an example of how to use a k8s deployment to run that command:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: das-mirror
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: das-mirror
+  strategy:
+    rollingUpdate:
+      maxSurge: 0
+      maxUnavailable: 50%
+    type: RollingUpdate
+  template:
+    metadata:
+      labels:
+        app: das-mirror
+    spec:
+      containers:
+        - command:
+            - bash
+            - -c
+            - |
+              mkdir -p /home/user/data/badgerdb
+              mkdir -p /home/user/data/syncState
+              /usr/local/bin/daserver --data-availability.parent-chain-node-url "<YOUR PARENT CHAIN RPC ENDPOINT>" --data-availability.sequencer-inbox-address "<SEQUENCER INBOX ADDRESS>" --enable-rest --rest-addr '0.0.0.0' --log-level 3 --data-availability.local-cache.enable --data-availability.rest-aggregator.enable --data-availability.rest-aggregator.urls "http://your-main-das.svc.cluster.local:9877" --data-availability.rest-aggregator.online-url-list "<URL OF THE LIST OF REST ENDPOINTS>" --data-availability.rest-aggregator.sync-to-storage.eager --data-availability.rest-aggregator.sync-to-storage.eager-lower-bound-block "BLOCK NUMBER" --data-availability.rest-aggregator.sync-to-storage.state-dir /home/user/data/syncState --data-availability.s3-storage.enable --data-availability.s3-storage.access-key "<YOUR ACCESS KEY>" --data-availability.s3-storage.bucket "<YOUR BUCKET>" --data-availability.s3-storage.region "<YOUR REGION>" --data-availability.s3-storage.secret-key "<YOUR SECRET KEY>" --data-availability.s3-storage.object-prefix "<YOUR OBJECT KEY PREFIX>/" --data-availability.s3-storage.discard-after-timeout=false --data-availability.local-db-storage.enable --data-availability.local-db-storage.data-dir /home/user/data/badgerdb --data-availability.local-db-storage.discard-after-timeout=false
+          image: @latestNitroNodeImage@
+          imagePullPolicy: Always
+          resources:
+            limits:
+              cpu: "4"
+              memory: 10Gi
+            requests:
+              cpu: "4"
+              memory: 10Gi
+          ports:
+            - containerPort: 9877
+              hostPort: 9877
+              protocol: TCP
+          volumeMounts:
+            - mountPath: /home/user/data/
+              name: data
+          readinessProbe:
+            failureThreshold: 3
+            httpGet:
+              path: /health/
+              port: 9877
+              scheme: HTTP
+            initialDelaySeconds: 5
+            periodSeconds: 5
+            successThreshold: 1
+            timeoutSeconds: 1
+      volumes:
+        - name: data
+          persistentVolumeClaim:
+            claimName: das-mirror
+```
+
+## Archive DA servers
+
+Archive DA servers don't discard any data, even after its expiry timeout. Each DAC should have at least one archive DAS to ensure that all historical data remains available.
+
+To activate the "archive mode" in your DAS, set the parameter `discard-after-timeout` to `false` for your storage backend. For example:
+
+```bash
+--data-availability.s3-storage.discard-after-timeout=false
+--data-availability.local-db-storage.discard-after-timeout=false
+```
+
+Note that `local-file-storage` and `ipfs-storage` never discard data, so the option `discard-after-timeout` is not available for them.
+
+Archive servers should make use of the `--data-availability.rest-aggregator.sync-to-storage` options described above to pull in any data that they don't have.
+
+## Testing the DAS
+
+Once the DAS is running, we can test if everything is working correctly using the following method.
+
+### Test 1: REST health check
+
+The REST interface enabled in the mirror DAS has a health check on the path `/health`, which will return `200` if the underlying storage is working, and `503` otherwise.
+
+Example:
+
+```bash
+curl -I <YOUR DAS REST ENDPOINT>/health
+```
+
+## Security considerations
+
+Keep in mind the following information when running the mirror DAS.
+
+For a mirror DAS, using a load balancer is recommended to manage incoming traffic effectively. Additionally, as the REST interface is cacheable, consider deploying a Content Delivery Network (CDN) or caching proxy in front of your REST endpoint. The URL for the REST interface will be publicly known; ensure that it is sufficiently distinct from the RPC endpoint to prevent the latter from being easily discovered.
+
+## What to do next?
+
+Once the DAS is deployed and tested, you'll have to communicate the following information to the chain owner, so they can update the chain parameters and configure the sequencer:
+
+- The https URL for the REST endpoint (e.g. das.your-chain.io/rest)
+
+import DASOptionalParameters from './partials/_das-optional-parameters.mdx';
+import DASMetrics from './partials/_das-metrics.mdx';
+
+## Optional parameters
+
+<DASOptionalParameters />
+
+## Metrics
+
+<DASMetrics />
diff --git a/arbitrum-docs/node-running/how-tos/data-availability-committee/introduction.mdx b/arbitrum-docs/node-running/how-tos/data-availability-committee/introduction.mdx
new file mode 100644
index 000000000..29afa60bd
--- /dev/null
+++ b/arbitrum-docs/node-running/how-tos/data-availability-committee/introduction.mdx
@@ -0,0 +1,56 @@
+---
+title: 'How to configure a Data Availability Committee: Introduction'
+description: Learn what's needed to configure a Data Availability Committee for your chain
+author: jose-franco
+sidebar_label: 'Introduction'
+sidebar_position: 1
+content-type: overview
+---
+
+import PublicPreviewBannerPartial from '../../../partials/_public-preview-banner-partial.md';
+
+<PublicPreviewBannerPartial />
+

+:::info
+
+AnyTrust chains rely on an external Data Availability Committee (DAC) to store data and provide it on-demand, instead of using their parent chain as the Data Availability (DA) layer. The members of the DAC run a Data Availability Server (DAS) to handle these operations.
+
+:::

+
+This section offers information and a series of how-to guides to help you along the process of setting up a Data Availability Committee. These guides target two audiences: committee members who wish to deploy a Data Availability Server, and chain owners who wish to configure their chain with the information of the Committee.
+
+Before following the guides in this section, you should be familiar with how the AnyTrust protocol works, and the role of the DAC in the protocol. Refer to _[Inside AnyTrust](/inside-anytrust.mdx)_ to learn more.
+
+## If you are a DAC member
+
+Committee members will need to run a DAS. To do that, they will first need to generate a BLS keypair and deploy a DAS. They may also choose to deploy an additional mirror DAS. Find more information in [How to deploy a DAS](./deploy-a-das.mdx) and [How to deploy a mirror DAS](./deploy-a-mirror-das.mdx).
+
+Here's a basic checklist of actions to complete for DAC members:
+
+- [Deploy a DAS](./deploy-a-das.mdx). Send the following information to the chain owner:
+  - Public BLS key
+  - The https URL for the RPC endpoint, which includes some random string (e.g. das.your-chain.io/rpc/randomstring123), communicated through a secure channel
+  - The https URL for the REST endpoint (e.g. das.your-chain.io/rest)
+- [Deploy a mirror DAS](./deploy-a-mirror-das.mdx) if you want to complement your setup with a mirror DAS. Send the following information to the chain owner:
+  - The https URL for the REST endpoint (e.g. das.your-chain.io/rest)
+
+## If you are a chain owner
+
+Chain owners will need to gather information from the committee members to craft the configuration needed to update their chain and the batch poster (more information in [How to configure the DAC in your chain](./configure-the-dac-in-your-chain.mdx)). They might also want to test each DAS individually by following the testing guides available in [How to deploy a DAS](./deploy-a-das.mdx#testing-the-das) and [How to deploy a mirror DAS](./deploy-a-mirror-das.mdx#testing-the-das).
+
+Here's a basic checklist of actions to complete for chain owners:
+
+- Gather the following information from every member of the committee:
+  - Public BLS key
+  - URL of the RPC endpoint
+  - URL(s) of the REST endpoint(s)
+- Ensure that at least one DAS is running as an [archive DAS](./deploy-a-das.mdx#archive-da-servers)
+- Generate the keyset and keyset hash with all the information from the servers (guide coming soon)
+- Craft the new configuration for the batch poster (guide coming soon)
+- Craft the new configuration for your chain's nodes (guide coming soon)
+- Update the SequencerInbox contract (guide coming soon)
+
+## Ask for help
+
+Configuring a DAC might be a complex process. If you need help setting it up, don't hesitate to ask us on [Discord](https://discord.gg/arbitrum).
diff --git a/arbitrum-docs/node-running/how-tos/data-availability-committee/partials/_das-metrics.mdx b/arbitrum-docs/node-running/how-tos/data-availability-committee/partials/_das-metrics.mdx
new file mode 100644
index 000000000..48c66131e
--- /dev/null
+++ b/arbitrum-docs/node-running/how-tos/data-availability-committee/partials/_das-metrics.mdx
@@ -0,0 +1,30 @@
+The DAS comes with the option of producing Prometheus metrics.
This option can be activated by using the following parameters:
+
+| Parameter                        | Description                                   |
+| -------------------------------- | --------------------------------------------- |
+| --metrics                        | Enables the metrics server                    |
+| --metrics-server.addr            | Metrics server address (default "127.0.0.1")  |
+| --metrics-server.port            | Metrics server port (default 6070)            |
+| --metrics-server.update-interval | Metrics server update interval (default 3s)   |
+
+When metrics are enabled, several useful metrics are available at the configured port, at path `debug/metrics` or `debug/metrics/prometheus`.
+
+### RPC metrics
+
+| Metric                                                        | Description                          |
+| -------------------------------------------------------------- | ------------------------------------ |
+| arb_das_rpc_store_requests                                    | Count of RPC Store calls             |
+| arb_das_rpc_store_success                                     | Successful RPC Store calls           |
+| arb_das_rpc_store_failure                                     | Failed RPC Store calls               |
+| arb_das_rpc_store_bytes                                       | Bytes stored with RPC Store calls    |
+| arb_das_rpc_store_duration (p50, p75, p95, p99, p999, p9999)  | Duration of RPC Store calls (ns)     |
+
+### REST metrics
+
+| Metric                                                             | Description                               |
+| -------------------------------------------------------------------- | ----------------------------------------- |
+| arb_das_rest_getbyhash_requests                                    | Count of REST GetByHash calls             |
+| arb_das_rest_getbyhash_success                                     | Successful REST GetByHash calls           |
+| arb_das_rest_getbyhash_failure                                     | Failed REST GetByHash calls               |
+| arb_das_rest_getbyhash_bytes                                       | Bytes retrieved with REST GetByHash calls |
+| arb_das_rest_getbyhash_duration (p50, p75, p95, p99, p999, p9999)  | Duration of REST GetByHash calls (ns)     |
diff --git a/arbitrum-docs/node-running/how-tos/data-availability-committee/partials/_das-optional-parameters.mdx b/arbitrum-docs/node-running/how-tos/data-availability-committee/partials/_das-optional-parameters.mdx
new file mode 100644
index 000000000..5679805dd
--- /dev/null
+++ b/arbitrum-docs/node-running/how-tos/data-availability-committee/partials/_das-optional-parameters.mdx
@@ -0,0 +1,6 @@
+Besides the parameters described in this guide, there are some more options that can be useful when running the DAS. For a comprehensive list of configuration parameters, you can run `daserver --help`.
+
+| Parameter   | Description                                                                                                          |
+| ----------- | ----------------------------------------------------------------------------------------------------------------------- |
+| --conf.dump | Prints out the current configuration                                                                                     |
+| --conf.file | Absolute path to the configuration file inside the volume to use instead of specifying all parameters in the command     |
diff --git a/arbitrum-docs/node-running/how-tos/data-availability-committee/partials/parameters/_ipfs-parameters.mdx b/arbitrum-docs/node-running/how-tos/data-availability-committee/partials/parameters/_ipfs-parameters.mdx
new file mode 100644
index 000000000..2ff2d05df
--- /dev/null
+++ b/arbitrum-docs/node-running/how-tos/data-availability-committee/partials/parameters/_ipfs-parameters.mdx
@@ -0,0 +1,5 @@
+| Parameter                                      | Description                                                                                                  |
+| ------------------------------------------------ | -------------------------------------------------------------------------------------------------------------- |
+| --data-availability.ipfs-storage.enable        | Enables storage/retrieval of sequencer batch data from IPFS                                                  |
+| --data-availability.ipfs-storage.profiles      | Comma separated list of IPFS profiles to use                                                                 |
+| --data-availability.ipfs-storage.read-timeout  | Timeout for IPFS reads, since by default it will wait forever.
Treat timeout as not found (default 1m0s) |
diff --git a/arbitrum-docs/node-running/how-tos/data-availability-committee/partials/parameters/_local-badger-db-parameters.mdx b/arbitrum-docs/node-running/how-tos/data-availability-committee/partials/parameters/_local-badger-db-parameters.mdx
new file mode 100644
index 000000000..4f28f0332
--- /dev/null
+++ b/arbitrum-docs/node-running/how-tos/data-availability-committee/partials/parameters/_local-badger-db-parameters.mdx
@@ -0,0 +1,5 @@
+| Parameter                                                   | Description                                                                                          |
+| ------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------ |
+| --data-availability.local-db-storage.enable                 | Enables storage/retrieval of sequencer batch data from a database on the local filesystem             |
+| --data-availability.local-db-storage.data-dir               | Absolute path of the directory inside the volume in which to store the database (it must exist)       |
+| --data-availability.local-db-storage.discard-after-timeout  | Whether to discard data after its expiry timeout (setting it to false activates the “archive” mode)   |
diff --git a/arbitrum-docs/node-running/how-tos/data-availability-committee/partials/parameters/_local-files-parameters.mdx b/arbitrum-docs/node-running/how-tos/data-availability-committee/partials/parameters/_local-files-parameters.mdx
new file mode 100644
index 000000000..7174f1c75
--- /dev/null
+++ b/arbitrum-docs/node-running/how-tos/data-availability-committee/partials/parameters/_local-files-parameters.mdx
@@ -0,0 +1,4 @@
+| Parameter                                        | Description                                                                                  |
+| -------------------------------------------------- | ------------------------------------------------------------------------------------------------ |
+| --data-availability.local-file-storage.enable    | Enables storage/retrieval of sequencer batch data from a directory of files, one per batch    |
+| --data-availability.local-file-storage.data-dir  | Absolute path of the directory inside the volume in which to store the data (it must exist)   |
diff --git a/arbitrum-docs/node-running/how-tos/data-availability-committee/partials/parameters/_s3-parameters.mdx b/arbitrum-docs/node-running/how-tos/data-availability-committee/partials/parameters/_s3-parameters.mdx
new file mode 100644
index 000000000..195a01fc8
--- /dev/null
+++ b/arbitrum-docs/node-running/how-tos/data-availability-committee/partials/parameters/_s3-parameters.mdx
@@ -0,0 +1,9 @@
+| Parameter                                             | Description                                                                                          |
+| ------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- |
+| --data-availability.s3-storage.enable                 | Enables storage/retrieval of sequencer batch data from an AWS S3 bucket                               |
+| --data-availability.s3-storage.access-key             | S3 access key                                                                                         |
+| --data-availability.s3-storage.bucket                 | S3 bucket                                                                                             |
+| --data-availability.s3-storage.region                 | S3 region                                                                                             |
+| --data-availability.s3-storage.secret-key             | S3 secret key                                                                                         |
+| --data-availability.s3-storage.object-prefix          | Prefix to add to S3 objects                                                                           |
+| --data-availability.s3-storage.discard-after-timeout  | Whether to discard data after its expiry timeout (setting it to false activates the “archive” mode)   |
diff --git a/arbitrum-docs/node-running/how-tos/running-a-daserver.mdx b/arbitrum-docs/node-running/how-tos/running-a-daserver.mdx
deleted file mode 100644
index 73a5fb147..000000000
--- a/arbitrum-docs/node-running/how-tos/running-a-daserver.mdx
+++ /dev/null
@@ -1,537 +0,0 @@
----
-title: 'How to run a data availability
server (DAS)' -description: This how-to will help you run a data availability server. -author: amsanghi -sidebar_label: Run a Data Availability Server -sidebar_position: 5 -content-type: how-to ---- - -## Description - -The Data Availability Server, `daserver`, allows storage and retrieval of transaction data batches for Arbitrum AnyTrust chains. It can be run in two modes: either committee member, or mirror. - -Committee members accept time-limited requests to store data batches from an Arbitrum AnyTrust sequencer, and if they store the data then they return a signed certificate promising to store that data. Committee members and mirrors both respond to requests to retrieve the data batches. - -The data batches are content addressed with a keccak256 tree-based hashing scheme called `dastree`. The hash is part of the Data Availability Certificate placed on the parent chain and that hash is used by the Nitro node to retrieve the data from `daservers`. - -Mirrors exist to replicate and serve the data to provide resiliency to the network in the case of committee members going down, and to make it so committee members don't need to serve requests for the data directly. Mirrors may also provide archived data beyond the limited time that committee members are required to store the data. - -This document gives sample configurations for `daserver` in committee member and mirror mode. - -### Interfaces - -There are two main interfaces, a REST interface supporting only GET operations and intended for public use, and an RPC interface intended for use only by the AnyTrust sequencer. Mirrors listen on the REST interface only and respond to queries on `/get-by-hash/`. The response is always the same for a given hash so it is cacheable; it contains a `cache-control` header specifying the object is immutable and to cache for up to 28 days. The REST interface has a health check on `/health` which will return 200 if the underlying storage is working, otherwise 503. - -Committee members listen on the REST interface and additionally listen on the RPC interface for `das_store` RPC messages from the sequencer. The sequencer signs its requests and the committee member checks the signature. The RPC interface also has a health check that checks the underlying storage that responds requests with RPC method `das_healthCheck`. - -IPFS is an alternative interface serving batch retrieval. A mirror can be configured to sync and pin batches to its local IPFS repository, then act as a node in the IPFS peer-to-peer network. A Nitro node that is configured to use IPFS that is syncing an AnyTrust chain will use the batch hashes from the parent chain to find the batch data on the IPFS peer-to-peer network. Depending on network configuration, that Nitro node may then also act as an IPFS node serving the batch data. - -### Storage - -`daserver` can be configured to use one or more of four storage backends; S3, files on local disk, database on disk, and IPFS. If more than one is selected, store requests must succeed to all of them for it to be considered successful, and retrieve requests only require one to succeed. - -Please give us feedback if there are other storage backends you would like supported. - -### Caching - -An in-memory cache can be enabled to avoid needing to access underlying storage for retrieve requests. 
- -### Synchronizing state - -`daserver` also has an optional REST aggregator which, in the case that a data batch is not found in cache or storage, queries for that batch from a list other of REST servers, and then stores that batch locally. This is how committee members that miss storing a batch (not all committee members are required by the AnyTrust protocol to report success in order to post the batch's certificate to the parent chain) can automatically repair gaps in data they store, and how mirrors can sync. A public list of REST endpoints is published online, which `daserver` can be configured to download and use, and additional endpoints can be specified in configuration. - -## Image: - -`@latestNitroNodeImage@` - -## Usage of daserver - -Options for both committee members and mirrors: - -``` - # Server options - --enable-rest enable the REST server listening on rest-addr and rest-port - --log-level int log level; 1: ERROR, 2: WARN, 3: INFO, 4: DEBUG, 5: TRACE (default 3) - --rest-addr string REST server listening interface (default "localhost") - --rest-port uint REST server listening port (default 9877) - - # Parent chain options - --data-availability.parent-chain-node-url string URL for parent chain node, only used in standalone daserver; when running as part of a node that node's parent chain configuration is used - --data-availability.sequencer-inbox-address string parent chain address of SequencerInbox contract - - # Storage options - --data-availability.local-db-storage.data-dir string directory in which to store the database - --data-availability.local-db-storage.discard-after-timeout discard data after its expiry timeout - --data-availability.local-db-storage.enable enable storage/retrieval of sequencer batch data from a database on the local filesystem - - --data-availability.local-file-storage.data-dir string local data directory - --data-availability.local-file-storage.enable enable storage/retrieval of sequencer batch data from a directory of files, one per batch - - --data-availability.s3-storage.access-key string S3 access key - --data-availability.s3-storage.bucket string S3 bucket - --data-availability.s3-storage.discard-after-timeout discard data after its expiry timeout - --data-availability.s3-storage.enable enable storage/retrieval of sequencer batch data from an AWS S3 bucket - --data-availability.s3-storage.object-prefix string prefix to add to S3 objects - --data-availability.s3-storage.region string S3 region - --data-availability.s3-storage.secret-key string S3 secret key - - --data-availability.ipfs-storage.enable enable storage/retrieval of sequencer batch data from IPFS - --data-availability.ipfs-storage.profiles string comma separated list of IPFS profiles to use - --data-availability.ipfs-storage.read-timeout duration timeout for IPFS reads, since by default it will wait forever. 
Treat timeout as not found (default 1m0s) - - # Cache options - --data-availability.local-cache.enable Enable local in-memory caching of sequencer batch data - --data-availability.local-cache.expiration duration Expiration time for in-memory cached sequencer batches (default 1h0m0s) - - # REST fallback options - --data-availability.rest-aggregator.enable enable retrieval of sequencer batch data from a list of remote REST endpoints; if other DAS storage types are enabled, this mode is used as a fallback - --data-availability.rest-aggregator.online-url-list string a URL to a list of URLs of REST das endpoints that is checked at startup; additive with the url option - --data-availability.rest-aggregator.urls strings list of URLs including 'http://' or 'https://' prefixes and port numbers to REST DAS endpoints; additive with the online-url-list option - --data-availability.rest-aggregator.sync-to-storage.check-already-exists check if the data already exists in this DAS's storage. Must be disabled for fast sync with an IPFS backend (default true) - --data-availability.rest-aggregator.sync-to-storage.eager eagerly sync batch data to this DAS's storage from the rest endpoints, using parent chain as the index of batch data hashes; otherwise only sync lazily - --data-availability.rest-aggregator.sync-to-storage.eager-lower-bound-block uint when eagerly syncing, start indexing forward from this block from parent chain. Only used if there is no sync state - --data-availability.rest-aggregator.sync-to-storage.retention-period duration period to retain synced data (defaults to forever) (default 2562047h47m16.854775807s) - --data-availability.rest-aggregator.sync-to-storage.state-dir string directory to store the sync state in, ie the block number currently synced up to, so that we don't sync from scratch each time - -``` - -``` -Options only for committee members: - --enable-rpc enable the HTTP-RPC server listening on rpc-addr and rpc-port - --rpc-addr string HTTP-RPC server listening interface (default "localhost") - --rpc-port uint HTTP-RPC server listening port (default 9876) - - --data-availability.key.key-dir string the directory to read the bls keypair ('das_bls.pub' and 'das_bls') from; if using any of the DAS storage types exactly one of key-dir or priv-key must be specified - --data-availability.key.priv-key string the base64 BLS private key to use for signing DAS certificates; if using any of the DAS storage types exactly one of key-dir or priv-key must be specified -``` - -Options generating/using JSON config: - -``` - --conf.dump print out currently active configuration file - --conf.file strings name of configuration file -``` - -Options for producing Prometheus metrics: - -``` - --metrics enable metrics - --metrics-server.addr string metrics server address (default "127.0.0.1") - --metrics-server.port int metrics server port (default 6070) - --metrics-server.update-interval duration metrics server update interval (default 3s) -``` - -Some options are not shown because they are only used by nodes, or they are experimental/advanced. A complete list of options can be found by running `daserver --help` - -## Sample deployments - -### Sample committee member - -Using `daserver` as a committee member requires: - -- A BLS private key to sign the Data Availability Certificates it returns to clients (the sequencer aka batch poster) requesting to Store data. -- Your parent chain address of the sequencer inbox contract, in order to find the batch poster signing address. 
Options for producing Prometheus metrics:

```
 --metrics                                   enable metrics
 --metrics-server.addr string                metrics server address (default "127.0.0.1")
 --metrics-server.port int                   metrics server port (default 6070)
 --metrics-server.update-interval duration   metrics server update interval (default 3s)
```

Some options are not shown here because they are only used by nodes, or because they are experimental/advanced. A complete list of options can be found by running `daserver --help`.

## Sample deployments

### Sample committee member

Using `daserver` as a committee member requires:

- A BLS private key to sign the Data Availability Certificates it returns to clients (the sequencer, aka the batch poster) requesting to store data.
- The parent chain address of the sequencer inbox contract, in order to find the batch poster's signing address.
- A parent chain RPC endpoint to query the sequencer inbox contract.
- A persistent volume to write the stored data to, if using one of the local disk modes.
- An S3 bucket, and credentials (secret key, access key) of an IAM user that is able to read and write from it, if you are using the S3 mode.

Once the DAS is set up, the local public key in `das_bls.pub` should be communicated out-of-band to the operator of the chain, along with the protocol (http/https), host, and port of the RPC server that the sequencer can reach, so that the key can be added to the committee keyset.

#### Set up persistent volume

This is the persistent volume for storing the DAS database and BLS keypair.

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: das-server
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
  storageClassName: gp2
```

#### Generate key

The BLS keypair must be generated using the `datool keygen` utility. It can be passed to the `daserver` executable by file or on the command line.

In this sample deployment, we use a Kubernetes deployment to run `datool keygen`, creating the keypair as files on the volume that the DAS will use. After this deployment has run once, it can be torn down and deleted.

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: das-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: das-server
  template:
    metadata:
      labels:
        app: das-server
    spec:
      containers:
      - command:
        - bash
        - -c
        - |
          mkdir -p /home/user/data/keys
          /usr/local/bin/datool keygen --dir /home/user/data/keys
          sleep infinity
        image: @latestNitroNodeImage@
        imagePullPolicy: Always
        resources:
          limits:
            cpu: "4"
            memory: 10Gi
          requests:
            cpu: "4"
            memory: 10Gi
        ports:
        - containerPort: 9876
          protocol: TCP
        volumeMounts:
        - mountPath: /home/user/data/
          name: data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: das-server
```
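
If you prefer to generate the keypair outside the cluster, the same command can be run directly against any directory of your choosing (the path below is an example):

```
# Writes das_bls (private key) and das_bls.pub (public key) into the given directory
/usr/local/bin/datool keygen --dir /home/user/data/keys
```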
#### Create committee member DAS deployment

This deployment sets up a DAS server for Arbitrum Nova Mainnet, using the L1 inbox contract at 0x211e1c4c7f1bf5351ac850ed10fd68cffcf6c21b. For Arbitrum Nova Mainnet you must specify a Mainnet Ethereum L1 RPC endpoint.

This configuration sets up two storage types. To disable either of them, remove the corresponding `--data-availability.(local-db-storage|s3-storage).enable` option; the other options for that storage type can then also be removed. If you are updating an existing deployment that uses the local files-on-disk storage type, you should keep `local-file-storage` enabled. This configuration sets the storage backends to discard data after the expiry timeout.

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: das-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: das-server
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 50%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: das-server
    spec:
      containers:
      - command:
        - bash
        - -c
        - |
          mkdir -p /home/user/data/badgerdb
          /usr/local/bin/daserver --data-availability.parent-chain-node-url <YOUR PARENT CHAIN RPC URL> --enable-rpc --rpc-addr '0.0.0.0' --enable-rest --rest-addr '0.0.0.0' --log-level 3 --data-availability.local-db-storage.enable --data-availability.local-db-storage.data-dir /home/user/data/badgerdb --data-availability.local-db-storage.discard-after-timeout --data-availability.s3-storage.enable --data-availability.s3-storage.access-key "<YOUR ACCESS KEY>" --data-availability.s3-storage.bucket <YOUR BUCKET> --data-availability.s3-storage.region <YOUR REGION> --data-availability.s3-storage.secret-key "<YOUR SECRET KEY>" --data-availability.s3-storage.object-prefix "YOUR OBJECT KEY PREFIX/" --data-availability.s3-storage.discard-after-timeout --data-availability.key.key-dir /home/user/data/keys --data-availability.local-cache.enable --data-availability.rest-aggregator.enable --data-availability.rest-aggregator.online-url-list "https://nova.arbitrum.io/das-servers" --data-availability.sequencer-inbox-address '0x211e1c4c7f1bf5351ac850ed10fd68cffcf6c21b'
        image: @latestNitroNodeImage@
        imagePullPolicy: Always
        resources:
          limits:
            cpu: "4"
            memory: 10Gi
          requests:
            cpu: "4"
            memory: 10Gi
        ports:
        - containerPort: 9876
          hostPort: 9876
          protocol: TCP
        - containerPort: 9877
          hostPort: 9877
          protocol: TCP
        volumeMounts:
        - mountPath: /home/user/data/
          name: data
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health/
            port: 9877
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: das-server
```
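
Assuming a standard Kubernetes setup (the manifest file name below is hypothetical), the deployment can be applied and smoke-tested against the same endpoint the readiness probe uses:

```
# Save the manifest above as das-server.yaml (example name), then:
kubectl apply -f das-server.yaml
kubectl get pods -l app=das-server

# Forward the REST port locally and hit the health endpoint; a 200 response means the REST server is up
kubectl port-forward deployment/das-server 9877:9877 &
curl http://localhost:9877/health/
```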
### Sample mirror

Using `daserver` as a mirror requires:

- The parent chain address of the sequencer inbox contract, for syncing all batch data.
- A parent chain RPC endpoint to query the sequencer inbox contract.
- A persistent volume to write the stored data to, if using one of the local disk modes.
- An S3 bucket, and credentials (secret key, access key) of an IAM user that is able to read and write from it, if you are using the S3 mode.

The mirror does not require a BLS key, since it will not accept store requests from the sequencer.

Once the mirror is set up, communicate a URL where it can be reached to the chain operator, so they can add it to the public mirror list.

#### Set up persistent volume

This is the persistent volume for storing the DAS database.

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: das-mirror
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
  storageClassName: gp2
```

#### Create mirror DAS deployment

This deployment sets up a DAS server for Arbitrum Nova Mainnet, using the L1 inbox contract at 0x211e1c4c7f1bf5351ac850ed10fd68cffcf6c21b. For Arbitrum Nova Mainnet you must specify a Mainnet Ethereum L1 RPC endpoint.

This configuration sets up two storage types. To disable either of them, remove the corresponding `--data-availability.(local-db-storage|s3-storage).enable` option; the other options for that storage type can then also be removed. This configuration sets the storage backends to keep all data forever.

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: das-mirror
spec:
  replicas: 1
  selector:
    matchLabels:
      app: das-mirror
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 50%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: das-mirror
    spec:
      containers:
      - command:
        - bash
        - -c
        - |
          mkdir -p /home/user/data/badgerdb
          mkdir -p /home/user/data/syncState
          /usr/local/bin/daserver --data-availability.parent-chain-node-url <YOUR PARENT CHAIN RPC URL> --enable-rest --rest-addr '0.0.0.0' --log-level 3 --data-availability.local-db-storage.enable --data-availability.local-db-storage.data-dir /home/user/data/badgerdb --data-availability.s3-storage.enable --data-availability.s3-storage.access-key "<YOUR ACCESS KEY>" --data-availability.s3-storage.bucket <YOUR BUCKET> --data-availability.s3-storage.region <YOUR REGION> --data-availability.s3-storage.secret-key "<YOUR SECRET KEY>" --data-availability.s3-storage.object-prefix "YOUR OBJECT KEY PREFIX/" --data-availability.local-cache.enable --data-availability.rest-aggregator.enable --data-availability.rest-aggregator.urls "http://your-committee-member.svc.cluster.local:9877" --data-availability.rest-aggregator.online-url-list "https://nova.arbitrum.io/das-servers" --data-availability.rest-aggregator.sync-to-storage.eager --data-availability.rest-aggregator.sync-to-storage.eager-lower-bound-block 15025611 --data-availability.sequencer-inbox-address '0x211e1c4c7f1bf5351ac850ed10fd68cffcf6c21b' --data-availability.rest-aggregator.sync-to-storage.state-dir /home/user/data/syncState
        image: @latestNitroNodeImage@
        imagePullPolicy: Always
        resources:
          limits:
            cpu: "4"
            memory: 10Gi
          requests:
            cpu: "4"
            memory: 10Gi
        ports:
        - containerPort: 9877
          hostPort: 9877
          protocol: TCP
        volumeMounts:
        - mountPath: /home/user/data/
          name: data
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health/
            port: 9877
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: das-mirror
```
#### Create IPFS mirror DAS deployment

This deployment sets up a DAS server for Arbitrum Nova Mainnet, using the L1 inbox contract at 0x211e1c4c7f1bf5351ac850ed10fd68cffcf6c21b. For Arbitrum Nova Mainnet you must specify a Mainnet Ethereum L1 RPC endpoint.

This configuration sets up `daserver` as an IPFS node. Port 4001 on the server should be exposed to the internet for IPFS p2p communication to work.

If this is the first IPFS mirror being set up, it will take a very long time to sync, because it first tries to find each batch on IPFS. Add the following configuration option to skip this step and sync directly from the REST endpoints: `--data-availability.rest-aggregator.sync-to-storage.check-already-exists=false`

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: das-mirror
spec:
  replicas: 1
  selector:
    matchLabels:
      app: das-mirror
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 50%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: das-mirror
    spec:
      containers:
      - command:
        - bash
        - -c
        - |
          mkdir -p /home/user/data/ipfsRepo
          mkdir -p /home/user/data/syncState
          /usr/local/bin/daserver --data-availability.parent-chain-node-url <YOUR PARENT CHAIN RPC URL> --enable-rest --rest-addr '0.0.0.0' --log-level 3 --data-availability.ipfs-storage.enable --data-availability.ipfs-storage.repo-dir /home/user/data/ipfsRepo --data-availability.rest-aggregator.enable --data-availability.rest-aggregator.urls "http://your-committee-member.svc.cluster.local:9877" --data-availability.rest-aggregator.online-url-list "https://nova.arbitrum.io/das-servers" --data-availability.rest-aggregator.sync-to-storage.eager --data-availability.rest-aggregator.sync-to-storage.eager-lower-bound-block 15025611 --data-availability.sequencer-inbox-address '0x211e1c4c7f1bf5351ac850ed10fd68cffcf6c21b' --data-availability.rest-aggregator.sync-to-storage.state-dir /home/user/data/syncState
        image: @latestNitroNodeImage@
        imagePullPolicy: Always
        resources:
          limits:
            cpu: "4"
            memory: 10Gi
          requests:
            cpu: "4"
            memory: 10Gi
        ports:
        - containerPort: 9877
          hostPort: 9877
          protocol: TCP
        - containerPort: 4001
          hostPort: 4001
          protocol: TCP
        volumeMounts:
        - mountPath: /home/user/data/
          name: data
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health/
            port: 9877
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: das-mirror
```

### Testing

#### Basic validation: Health check data is present

The docker image includes the `datool` utility, which can be used to store and retrieve messages from a DAS. We take advantage of a data hash that is always present when the health check is enabled.

From the pod:

```
$ /usr/local/bin/datool client rest getbyhash --url http://localhost:9877 --data-hash 0x8b248e2bd8f75bf1334fe7f0da75cc7c1a34e00e00a22a96b7a43d580d250f3d
Message: Test-Data
```

If you do not have the health check configured yet, you can trigger one manually as follows:

```
$ curl http://localhost:9877/health
```

Using curl to check the REST endpoint:

```
$ curl https://<YOUR REST ENDPOINT>/get-by-hash/8b248e2bd8f75bf1334fe7f0da75cc7c1a34e00e00a22a96b7a43d580d250f3d
{"data":"VGVzdC1EYXRh"}
```

#### Further validation: Using the store interface directly

The store interface of `daserver` validates that requests to store data are signed by the batch poster's ECDSA key, identified via a call to the sequencer inbox contract on the parent chain. It can also be configured to accept store requests signed with another ECDSA key of your choosing, which can be useful for running load tests, canaries, or troubleshooting your own infrastructure. Using this facility, a load test could be constructed by writing a script that stores arbitrary amounts of data at an arbitrary rate, and a canary could be constructed to store and retrieve data on some interval.
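
For instance, a canary along those lines might look like the following sketch. It assumes the extra ECDSA signing key set up in the next step, a committee member listening on localhost, and that `datool`'s output matches the `Hex Encoded Data Hash:` and `Message:` lines shown in the examples below:

```
#!/bin/bash
# Hypothetical canary: store a fresh message every 60s, then read it back via REST
while true; do
  MSG="canary-$(date +%s)"
  # 'datool client rpc store' prints a line containing 'Hex Encoded Data Hash:'
  HASH=$(/usr/local/bin/datool client rpc store --url http://localhost:9876 \
    --message "$MSG" --signing-key /dir-of-your-choice/ecdsa \
    | grep 'Hex Encoded Data Hash' | awk '{print $NF}')
  # Retrieve the batch and check that it round-trips
  /usr/local/bin/datool client rest getbyhash --url http://localhost:9877 \
    --data-hash "$HASH" | grep -q "$MSG" && echo "ok $HASH" || echo "FAILED $HASH"
  sleep 60
done
```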
Generate an ECDSA keypair:

```
$ /usr/local/bin/datool keygen --dir /dir-of-your-choice/ --ecdsa
```

Then add the following configuration option to `daserver`:

```
--data-availability.extra-signature-checking-public-key /dir-of-your-choice/ecdsa.pub

OR

--data-availability.extra-signature-checking-public-key 0x<HEX ENCODED PUBLIC KEY>
```

Now you can use the `datool` utility to send store requests signed with the ECDSA private key:

```
$ /usr/local/bin/datool client rpc store --url http://localhost:9876 --message "Hello world" --signing-key /dir-of-your-choice/ecdsa

OR

$ /usr/local/bin/datool client rpc store --url http://localhost:9876 --message "Hello world" --signing-key "0x<HEX ENCODED PRIVATE KEY>"
```

The above command outputs a `Hex Encoded Data Hash: ` that can be used to retrieve the data:

```
$ /usr/local/bin/datool client rest getbyhash --url http://localhost:9877 --data-hash 0x052cca0e379137c975c966bcc69ac8237ac38dc1fcf21ac9a6524c87a2aab423
Message: Hello world
```

The retention period defaults to 24h, but can be configured for `datool client rpc store` with the option:

```
--das-retention-period
```

### Deployment recommendations

The REST interface is cacheable. Consider using a CDN or caching proxy in front of your REST endpoint.

If you are running a mirror, the REST interface of your committee member does not have to be exposed publicly. Your mirrors can sync over your private network from the REST interface of your committee member and from other public mirrors.

### Metrics

If metrics are enabled in the configuration, several useful metrics are available at the configured port (default 6070), under the path `debug/metrics` or `debug/metrics/prometheus`.

| Metric                                                             | Description                               |
| ------------------------------------------------------------------ | ----------------------------------------- |
| arb_das_rest_getbyhash_requests                                     | Count of REST GetByHash calls             |
| arb_das_rest_getbyhash_success                                      | Successful REST GetByHash calls           |
| arb_das_rest_getbyhash_failure                                      | Failed REST GetByHash calls               |
| arb_das_rest_getbyhash_bytes                                        | Bytes retrieved with REST GetByHash calls |
| arb_das_rest_getbyhash_duration (p50, p75, p95, p99, p999, p9999)   | Duration of REST GetByHash calls (ns)     |
| arb_das_rpc_store_requests                                          | Count of RPC Store calls                  |
| arb_das_rpc_store_success                                           | Successful RPC Store calls                |
| arb_das_rpc_store_failure                                           | Failed RPC Store calls                    |
| arb_das_rpc_store_bytes                                             | Bytes stored with RPC Store calls         |
| arb_das_rpc_store_duration (p50, p75, p95, p99, p999, p9999)        | Duration of RPC Store calls (ns)          |
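
For a quick spot-check without a Prometheus server, the endpoint can be queried directly (default address and port assumed):

```
# Fetch the current counters in Prometheus text format
curl http://localhost:6070/debug/metrics/prometheus
```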
diff --git a/vercel.json b/vercel.json
index 4e61a4f8d..d155e2e61 100644
--- a/vercel.json
+++ b/vercel.json
@@ -397,7 +397,12 @@
     },
     {
       "source": "/das/daserver-instructions",
-      "destination": "/node-running/how-tos/running-a-daserver",
+      "destination": "/node-running/how-tos/data-availability-committee/introduction",
+      "permanent": false
+    },
+    {
+      "source": "/node-running/how-tos/running-a-daserver",
+      "destination": "/node-running/how-tos/data-availability-committee/introduction",
       "permanent": false
     }
   ]
diff --git a/website/src/css/partials/_dynamic-content-tabs.scss b/website/src/css/partials/_dynamic-content-tabs.scss
index 5c68b7762..086cce5f3 100644
--- a/website/src/css/partials/_dynamic-content-tabs.scss
+++ b/website/src/css/partials/_dynamic-content-tabs.scss
@@ -16,7 +16,7 @@
   position: relative;
   margin-bottom: 25px;

-  .tabgroup-with-label {
+  .tabgroup {
     font-size: 13px;
     font-weight: 500;
     border: 1px solid transparent;
@@ -35,24 +35,10 @@
       position: relative;

       &:first-child {
-        cursor: default !important;
-        color: black;
-        width: 150px;
-        border: 1px solid transparent !important;
-        border-right: 0px !important;
-
-        &:hover {
-          background-color: transparent !important;
-          cursor: default !important;
-        }
-      }
-
-      &:nth-child(2) {
         border-top-left-radius: 5px !important;
         border-bottom-left-radius: 5px !important;
         border-left: 1px solid var(--tab-gray);
         border-right: 0px !important;
-        z-index: 2;
       }

       &:last-child {
@@ -96,4 +82,31 @@
       }
     }
   }
+
+  .tabgroup-with-label {
+    @extend .tabgroup;
+
+    .tabs__item {
+      &:first-child {
+        cursor: default !important;
+        color: black;
+        width: 150px;
+        border: 1px solid transparent !important;
+        border-right: 0px !important;
+
+        &:hover {
+          background-color: transparent !important;
+          cursor: default !important;
+        }
+      }
+
+      &:nth-child(2) {
+        border-top-left-radius: 5px !important;
+        border-bottom-left-radius: 5px !important;
+        border-left: 1px solid var(--tab-gray);
+        border-right: 0px !important;
+        z-index: 2;
+      }
+    }
+  }
+}