diff --git a/pages/404.mdx b/pages/404.mdx index 0726bb9a0..31f68cd2c 100644 --- a/pages/404.mdx +++ b/pages/404.mdx @@ -10,5 +10,5 @@ description: 404 page not found and directs users to submit a GitHub issue. ## Let's find our way back.
Visit the [homepage](index) to get started. -#### Please help by [submitting an issue](https://github.com/ethereum-optimism/docs/issues/new/choose) for the broken link. ❤️ +#### Please help by [submitting an issue](https://github.com/metalpay/metal-l2-docs/issues/new/choose) for the broken link. ❤️ diff --git a/pages/500.mdx b/pages/500.mdx index 0971e948c..6eb5107ea 100644 --- a/pages/500.mdx +++ b/pages/500.mdx @@ -6,8 +6,6 @@ description: 500 internal server error and directs users to submit a git issue. # Unexpected Error -![500 Error Warning.](/img/icons/500-page.svg) - ## Something isn't quite right. Let's start again on the [homepage](index). -#### Please help by [submitting an issue](https://github.com/ethereum-optimism/docs/issues/new/choose) about what led you to this page. ❤️ +#### Please help by [submitting an issue](https://github.com/metalpay/metal-l2-docs/issues/new/choose) about what led you to this page. ❤️ diff --git a/pages/builders/_meta.json b/pages/builders/_meta.json index 17ea9b121..de19146c7 100644 --- a/pages/builders/_meta.json +++ b/pages/builders/_meta.json @@ -1,7 +1,6 @@ { "notices": "Notices (README)", "app-developers": "App developers", - "chain-operators": "Chain operators", "node-operators": "Node operators", "tools": "Developer tools" } diff --git a/pages/builders/app-developers.mdx b/pages/builders/app-developers.mdx index c5dc109d2..9ba7c0986 100644 --- a/pages/builders/app-developers.mdx +++ b/pages/builders/app-developers.mdx @@ -1,6 +1,6 @@ --- title: App Developers -description: If you're a developer looking to build on OP Stack, you've come to the right place. In this area of the Optimism Docs you'll find everything you ... +description: If you're a developer looking to build on Metal L2, you've come to the right place. In this area of the Metal L2 Docs you'll find everything you ... 
lang: en-US --- @@ -8,7 +8,7 @@ import { Card, Cards } from 'nextra/components' # App Developers -If you're a developer looking to build on OP Mainnet, you've come to the right place. In this area of the Optimism Docs you'll find everything you ... +If you're a developer looking to build on Metal L2, you've come to the right place. In this area of the Metal L2 Docs you'll find what you need to build the next great blockchain app. diff --git a/pages/builders/app-developers/bridging.mdx b/pages/builders/app-developers/bridging.mdx index 893cb6db6..f8ecc8210 100644 --- a/pages/builders/app-developers/bridging.mdx +++ b/pages/builders/app-developers/bridging.mdx @@ -8,7 +8,7 @@ import { Card, Cards } from 'nextra/components' # Bridging -This section provides information on bridging basics, custom bridges, sending data between l1 and l2 and using the standard bridge. You'll find guide, overview to help you understand and work with these topics. +This section provides information on bridging basics, custom bridges, sending data between L1 and L2, and using the standard bridge. These guides will help you understand and work with these topics. diff --git a/pages/builders/app-developers/overview.mdx b/pages/builders/app-developers/overview.mdx index 3ee3a544c..4055e8441 100644 --- a/pages/builders/app-developers/overview.mdx +++ b/pages/builders/app-developers/overview.mdx @@ -9,7 +9,7 @@ import { Cards, Card } from 'nextra/components' # App developer overview If you're a developer looking to build on Metal L2, you've come to the right place. -In this area of the Metal L2 Docs you'll find everything you need to know about building Metal L2 applications for deployment on the Superchains Banking Layer, Metal L2. +In this area of the Metal L2 Docs you'll find everything you need to know about building Metal L2 applications for deployment on the Superchain Banking Layer, Metal L2. 
## Getting started diff --git a/pages/builders/chain-operators.mdx b/pages/builders/chain-operators.mdx deleted file mode 100644 index 73c0e4bb9..000000000 --- a/pages/builders/chain-operators.mdx +++ /dev/null @@ -1,31 +0,0 @@ ---- -title: Chain Operators -description: Documentation covering Architecture, Configuration, Deploy, Features, Hacks, Management, Self Hosted, Tools, Tutorials in the Chain Operators section of the OP Stack ecosystem. -lang: en-US ---- - -import { Card, Cards } from 'nextra/components' - -# Chain Operators - -Documentation covering Architecture, Configuration, Deploy, Features, Hacks, Management, Self Hosted, Tools, Tutorials in the Chain Operators section of the OP Stack ecosystem. - - - - - - - - - - - - - - - - - - - - diff --git a/pages/builders/chain-operators/_meta.json b/pages/builders/chain-operators/_meta.json deleted file mode 100644 index e26e786c5..000000000 --- a/pages/builders/chain-operators/_meta.json +++ /dev/null @@ -1,11 +0,0 @@ -{ - "architecture": "Architecture", - "self-hosted": "Start a self-hosted chain", - "configuration": "Chain configuration", - "management": "Chain management", - "features": "Chain features", - "deploy": "Deployment", - "tutorials": "Tutorials", - "tools": "Chain tools", - "hacks": "OP Stack hacks" -} diff --git a/pages/builders/chain-operators/architecture.mdx b/pages/builders/chain-operators/architecture.mdx deleted file mode 100644 index 7318cfc12..000000000 --- a/pages/builders/chain-operators/architecture.mdx +++ /dev/null @@ -1,104 +0,0 @@ ---- -title: Chain architecture -lang: en-US -description: Learn about the OP chain architecture. ---- - -import Image from 'next/image' -import { Callout } from 'nextra/components' -import {OpProposerDescriptionShort} from '@/content/index.js' - -# Chain architecture - -This page contains information about the components of the rollup protocol and -how they work together to build the layer 2 blockchain from the Chain Operator's -perspective. 
The OP Stack is built in such a way that it is as similar to -Ethereum as possible. Like Ethereum, the OP Stack has execution and consensus -clients. The OP Stack also has some privileged roles that produce L2 blocks. -If you want a more detailed view of the OP Stack protocol, check out the -[OP Stack section](/stack/getting-started) of our documentation. - -## Permissioned components - -These clients and services work together to enable the block production on the L2 network. -Sequencer nodes (`op-geth` + `op-node`) gather proposed transactions from users. -The batcher submits batch data to L1 which controls the safe blocks and ultimately -controls the canonical chain. The proposer submits output roots to L1 which control -L2 to L1 messaging. - -Sequencer Component Diagram - -### op-geth - -`op-geth` implements the layer 2 execution layer, with [minimal changes](https://op-geth.optimism.io/) -for a secure Ethereum-equivalent application environment. - -### op-node - -`op-node` implements most rollup-specific functionality as the layer 2 -consensus layer, similar to a layer 1 beacon-node. The `op-node` is stateless and gets -its view of the world from `op-geth`. - -### op-batcher - -`op-batcher` is the service that submits the L2 Sequencer data to L1, to make it available -for verifiers. To reduce the cost of writing to the L1, it only posts the minimal -amount of data required to reproduce L2 blocks. - -### op-proposer - - - -## Ingress traffic - -It is important to setup a robust chain architecture to handle large volumes of RPC -requests from your users. The Sequencer node has the important job of working with -the batcher to handle block creation. To allow the Sequencer to focus on that job, -you can peer replica nodes to handle the rest of the work. - -An example of this would be to configure [proxyd](https://docs.optimism.io/builders/chain-operators/tools/proxyd) -to route RPC methods, retry failed requests, load balance, etc. 
Users sending -`eth_sendRawTransaction` requests can have their requests forwarded directly to the -Sequencer. All other RPC requests can be forwarded to replica nodes. - -Ingress Traffic Diagram - -### proxyd - -This tool is an RPC request router and proxy. It does the following things: - -1. Whitelists RPC methods. -2. Routes RPC methods to groups of backend services. -3. Automatically retries failed backend requests. -4. Track backend consensus (latest, safe, finalized blocks), peer count and sync state. -5. Re-write requests and responses to enforce consensus. -6. Load balance requests across backend services. -7. Cache immutable responses from backends. -8. Provides metrics to measure request latency, error rates, and the like. - -### Sequencer - -The Sequencer node works with the batcher and proposer to create new blocks. So it should -handle the state changing RPC request `eth_sendRawTransaction`. It can be peered with -replica nodes to gossip new `unsafe` blocks to the rest of the network. - - -To run a rollup, you need a minimum of one archive node. This is required by the proposer as the data that it needs can be older than the data available to a full node. Note that since the proposer doesn't care what archive node it points to, you can technically point it towards an archive node that isn't the sequencer. - - -Sequencer Node Diagram - -### Replica node - -The replica nodes are additional nodes on the network. They can be peered to the Sequencer, -which might not be connected to the rest of the internet, and other replicas. Additional -replicas can help horizontally scale RPC requests. - -Replica Node Diagram - -## Next steps - -* Find out how you can support [snap sync](/builders/chain-operators/management/snap-sync) -on your chain. -* Find out how you can utilize [blob space](/builders/chain-operators/management/blobs) -to reduce the transaction fee cost on your chain. 
diff --git a/pages/builders/chain-operators/configuration.mdx b/pages/builders/chain-operators/configuration.mdx deleted file mode 100644 index f8ea320f4..000000000 --- a/pages/builders/chain-operators/configuration.mdx +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: Configuration -lang: en-US -description: Overview of configuration options for batchers, chain operators, proposers, and rollup deployments. ---- - -import { Card, Cards } from 'nextra/components' - -# Configuration - -This section provides information on batcher configuration, chain operator configurations, proposer configuration, and rollup deployment configuration. Users will find API references and overviews to help understand and work with these topics. - - - - - - - - - - diff --git a/pages/builders/chain-operators/configuration/_meta.json b/pages/builders/chain-operators/configuration/_meta.json deleted file mode 100644 index cfa1aaf62..000000000 --- a/pages/builders/chain-operators/configuration/_meta.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "overview": "Overview", - "rollup": "Rollup deployment configuration", - "batcher": "Batcher configuration", - "proposer": "Proposer configuration" -} \ No newline at end of file diff --git a/pages/builders/chain-operators/configuration/batcher.mdx b/pages/builders/chain-operators/configuration/batcher.mdx deleted file mode 100644 index 3b0dca073..000000000 --- a/pages/builders/chain-operators/configuration/batcher.mdx +++ /dev/null @@ -1,675 +0,0 @@ ---- -title: Batcher configuration -lang: en-US -description: Learn the OP Stack batcher configurations. ---- - -import { Callout, Tabs } from 'nextra/components' - -# Batcher configuration - -This page lists all configuration options for the op-batcher. The op-batcher posts -L2 sequencer data to the L1, to make it available for verifiers. The following -options are from the `--help` in [v1.7.6](https://github.com/ethereum-optimism/optimism/releases/tag/v1.7.6). 
- -## Global options - -### active-sequencer-check-duration - -The duration between checks to determine the active sequencer endpoint. The -default value is `2m0s`. - - - `--active-sequencer-check-duration=` - `--active-sequencer-check-duration=2m0s` - `OP_BATCHER_ACTIVE_SEQUENCER_CHECK_DURATION=2m0s` - - -### approx-compr-ratio - -The approximate compression ratio (`<=1.0`). Only relevant for ratio -compressor. The default value is `0.6`. - - - `--approx-compr-ratio=` - `--approx-compr-ratio=0.6` - `OP_BATCHER_APPROX_COMPR_RATIO=0.6` - - -### batch-type - -The batch type. 0 for `SingularBatch` and 1 for `SpanBatch`. The default value -is `0` for `SingularBatch`. - - - `--batch-type=` - `--batch-type=singular` - `OP_BATCHER_BATCH_TYPE=` - - -### check-recent-txs-depth - -Indicates how many blocks back the batcher should look during startup for a -recent batch tx on L1. This can speed up waiting for node sync. It should be -set to the verifier confirmation depth of the sequencer (e.g. 4). The default -value is `0`. - - - `--check-recent-txs-depth=` - `--check-recent-txs-depth=0` - `OP_BATCHER_CHECK_RECENT_TXS_DEPTH=0` - - -### compression-algo - -The compression algorithm to use. Valid options: zlib, brotli, brotli-9, -brotli-10, brotli-11. The default value is `zlib`. - - - `--compression-algo=` - `--compression-algo=zlib` - `OP_BATCHER_COMPRESSION_ALGO=zlib` - - -### compressor - -The type of compressor. Valid options: none, ratio, shadow. The default value -is `shadow`. - - - `--compressor=` - `--compressor=shadow` - `OP_BATCHER_COMPRESSOR=shadow` - - -### data-availability-type - -The data availability type to use for submitting batches to the L1. Valid -options: calldata, blobs. The default value is `calldata`. - - - `--data-availability-type=` - `--data-availability-type=calldata` - `OP_BATCHER_DATA_AVAILABILITY_TYPE=calldata` - - -### fee-limit-multiplier - -The multiplier applied to fee suggestions to put a hard limit on fee increases. 
-The default value is `5`. - - - `--fee-limit-multiplier=` - `--fee-limit-multiplier=5` - `OP_BATCHER_TXMGR_FEE_LIMIT_MULTIPLIER=5` - - -### hd-path - -The HD path used to derive the sequencer wallet from the mnemonic. The mnemonic -flag must also be set. - - - `--hd-path=` - `--hd-path=` - `OP_BATCHER_HD_PATH=` - - -### l1-eth-rpc - -HTTP provider URL for L1. - - - `--l1-eth-rpc=` - `--l1-eth-rpc` - `OP_BATCHER_L1_ETH_RPC=` - - -### l2-eth-rpc - -HTTP provider URL for L2 execution engine. A comma-separated list enables the -active L2 endpoint provider. Such a list needs to match the number of -rollup-rpcs provided. - - - `--l2-eth-rpc=` - `--l2-eth-rpc=` - `OP_BATCHER_L2_ETH_RPC=` - - -### log.color - -Color the log output if in terminal mode. The default value is `false`. - - - `--log.color=` - `--log.color=false` - `OP_BATCHER_LOG_COLOR=false` - - -### log.format - -Format the log output. Supported formats: 'text', 'terminal', 'logfmt', 'json', -'json-pretty'. The default value is `text`. - - - `--log.format=` - `--log.format=text` - `OP_BATCHER_LOG_FORMAT=text` - - -### log.level - -The lowest log level that will be output. The default value is `INFO`. - - - `--log.level=` - `--log.level=INFO` - `OP_BATCHER_LOG_LEVEL=INFO` - - -### max-channel-duration - -The maximum duration of L1-blocks to keep a channel open. 0 to disable. The -default value is `0`. - - - `--max-channel-duration=` - `--max-channel-duration=0` - `OP_BATCHER_MAX_CHANNEL_DURATION=0` - - -### max-l1-tx-size-bytes - -The maximum size of a batch tx submitted to L1. Ignored for blobs, where max -blob size will be used. The default value is `120000`. - - - `--max-l1-tx-size-bytes=` - `--max-l1-tx-size-bytes=120000` - `OP_BATCHER_MAX_L1_TX_SIZE_BYTES=120000` - - -### max-pending-tx - -The maximum number of pending transactions. 0 for no limit. The default value -is `1`. - - - `--max-pending-tx=` - `--max-pending-tx=1` - `OP_BATCHER_MAX_PENDING_TX=1` - - -### metrics.addr - -Metrics listening address. 
The default value is `0.0.0.0`. - - - `--metrics.addr=` - `--metrics.addr=0.0.0.0` - `OP_BATCHER_METRICS_ADDR=0.0.0.0` - - -### metrics.enabled - -Enable the metrics server. The default value is `false`. - - - `--metrics.enabled=` - `--metrics.enabled=false` - `OP_BATCHER_METRICS_ENABLED=false` - - -### metrics.port - -Metrics listening port. The default value is `7300`. - - - `--metrics.port=` - `--metrics.port=7300` - `OP_BATCHER_METRICS_PORT=7300` - - -### mnemonic - -The mnemonic used to derive the wallets for either the service. - - - `--mnemonic=` - `--mnemonic=` - `OP_BATCHER_MNEMONIC=` - - -### network-timeout - -Timeout for all network operations. The default value is `10s`. - - - `--network-timeout=` - `--network-timeout=10s` - `OP_BATCHER_NETWORK_TIMEOUT=10s` - - -### num-confirmations - -Number of confirmations which we will wait after sending a transaction. The -default value is `10`. - - - `--num-confirmations=` - `--num-confirmations=10` - `OP_BATCHER_NUM_CONFIRMATIONS=10` - - -### plasma.da-server - -HTTP address of a DA Server. - - - `--plasma.da-server=` - `--plasma.da-server=` - `OP_BATCHER_PLASMA_DA_SERVER=` - - -### plasma.da-service - -Use DA service type where commitments are generated by plasma server. The -default value is `false`. - - - `--plasma.da-service=` - `--plasma.da-service=false` - `OP_BATCHER_PLASMA_DA_SERVICE=false` - - -### plasma.enabled - -Enable plasma mode. The default value is `false`. - - - `--plasma.enabled=` - `--plasma.enabled=false` - `OP_BATCHER_PLASMA_ENABLED=false` - - -### plasma.verify-on-read - -Verify input data matches the commitments from the DA storage service. The -default value is `true`. - - - `--plasma.verify-on-read=` - `--plasma.verify-on-read=true` - `OP_BATCHER_PLASMA_VERIFY_ON_READ=true` - - -### poll-interval - -How frequently to poll L2 for new blocks. The default value is `6s`. 
- - - `--poll-interval=` - `--poll-interval=6s` - `OP_BATCHER_POLL_INTERVAL=6s` - - -### pprof.addr - -pprof listening address. The default value is `0.0.0.0`. - - - `--pprof.addr=` - `--pprof.addr=0.0.0.0` - `OP_BATCHER_PPROF_ADDR=0.0.0.0` - - -### pprof.enabled - -Enable the pprof server. The default value is `false`. - - - `--pprof.enabled=` - `--pprof.enabled=false` - `OP_BATCHER_PPROF_ENABLED=false` - - -### pprof.path - -pprof file path. If it is a directory, the path is `{dir}/{profileType}.prof`. - - - `--pprof.path=` - `--pprof.path=` - `OP_BATCHER_PPROF_PATH=` - - -### pprof.port - -pprof listening port. The default value is `6060`. - - - `--pprof.port=` - `--pprof.port=6060` - `OP_BATCHER_PPROF_PORT=6060` - - -### pprof.type - -pprof profile type. One of cpu, heap, goroutine, threadcreate, block, mutex, -allocs. - - - `--pprof.type=` - `--pprof.type` - `OP_BATCHER_PPROF_TYPE=` - - -### private-key - -The private key to use with the service. Must not be used with mnemonic. - - - `--private-key=` - `--private-key=` - `OP_BATCHER_PRIVATE_KEY=` - - -### resubmission-timeout - -Duration we will wait before resubmitting a transaction to L1. The default -value is `48s`. - - - `--resubmission-timeout=` - `--resubmission-timeout=48s` - `OP_BATCHER_RESUBMISSION_TIMEOUT=48s` - - -### rollup-rpc - -HTTP provider URL for Rollup node. A comma-separated list enables the active L2 -endpoint provider. Such a list needs to match the number of l2-eth-rpcs -provided. - - - `--rollup-rpc=` - `--rollup-rpc=` - `OP_BATCHER_ROLLUP_RPC=` - - -### rpc.addr - -rpc listening address. The default value is `0.0.0.0`. - - - `--rpc.addr=` - `--rpc.addr=0.0.0.0` - `OP_BATCHER_RPC_ADDR=0.0.0.0` - - -### rpc.enable-admin - -Enable the admin API. The default value is `false`. - - - `--rpc.enable-admin=` - `--rpc.enable-admin=false` - `OP_BATCHER_RPC_ENABLE_ADMIN=false` - - -### rpc.port - -rpc listening port. The default value is `8545`. 
- - - `--rpc.port=` - `--rpc.port=8545` - `OP_BATCHER_RPC_PORT=8545` - - -### safe-abort-nonce-too-low-count - -Number of ErrNonceTooLow observations required to give up on a tx at a -particular nonce without receiving confirmation. The default value is `3`. - - - `--safe-abort-nonce-too-low-count=` - `--safe-abort-nonce-too-low-count=3` - `OP_BATCHER_SAFE_ABORT_NONCE_TOO_LOW_COUNT=3` - - -### sequencer-hd-path - -DEPRECATED: The HD path used to derive the sequencer wallet from the mnemonic. -The mnemonic flag must also be set. - - - `--sequencer-hd-path=` - `--sequencer-hd-path` - `OP_BATCHER_SEQUENCER_HD_PATH=` - - -### signer.address - -Address the signer is signing transactions for. - - - `--signer.address=` - `--signer.address=` - `OP_BATCHER_SIGNER_ADDRESS=` - - -### signer.endpoint - -Signer endpoint the client will connect to. - - - `--signer.endpoint=` - `--signer.endpoint=` - `OP_BATCHER_SIGNER_ENDPOINT=` - - -### signer.tls.ca - -tls ca cert path. The default value is `tls/ca.crt`. - - - `--signer.tls.ca=` - `--signer.tls.ca=tls/ca.crt` - `OP_BATCHER_SIGNER_TLS_CA=tls/ca.crt` - - -### signer.tls.cert - -tls cert path. The default value is `tls/tls.crt`. - - - `--signer.tls.cert=` - `--signer.tls.cert=tls/tls.crt` - `OP_BATCHER_SIGNER_TLS_CERT=` - - -### signer.tls.key - -tls key. The default value is `tls/tls.key`. - - - `--signer.tls.key=` - `--signer.tls.key=tls/tls.key` - `OP_BATCHER_SIGNER_TLS_KEY=` - - -### stopped - -Initialize the batcher in a stopped state. The batcher can be started using the -admin_startBatcher RPC. The default value is `false`. - - - `--stopped=` - `--stopped=false` - `OP_BATCHER_STOPPED=false` - - -### sub-safety-margin - -The batcher tx submission safety margin (in #L1-blocks) to subtract from a -channel's timeout and sequencing window, to guarantee safe inclusion of a -channel on L1. The default value is `10`. 
- - - `--sub-safety-margin=` - `--sub-safety-margin=10` - `OP_BATCHER_SUB_SAFETY_MARGIN=10s` - - -### target-num-frames - -The target number of frames to create per channel. Controls number of blobs per -blob tx, if using Blob DA. The default value is `1`. - - - `--target-num-frames=` - `--target-num-frames=1` - `OP_BATCHER_TARGET_NUM_FRAMES=1` - - -### txmgr.fee-limit-threshold - -The minimum threshold (in GWei) at which fee bumping starts to be capped. -Allows arbitrary fee bumps below this threshold. The default value is `100`. - - - `--txmgr.fee-limit-threshold=` - `--txmgr.fee-limit-threshold=100` - `OP_BATCHER_TXMGR_FEE_LIMIT_THRESHOLD=100` - - -### txmgr.min-basefee - -Enforces a minimum base fee (in GWei) to assume when determining tx fees. 1 -GWei by default. The default value is `1`. - - - `--txmgr.min-basefee=` - `--txmgr.min-basefee=1` - `OP_BATCHER_TXMGR_MIN_BASEFEE=1` - - -### txmgr.min-tip-cap - -Enforces a minimum tip cap (in GWei) to use when determining tx fees. 1 GWei by -default. The default value is `1`. - - - `--txmgr.min-tip-cap=` - `--txmgr.min-tip-cap=1` - `OP_BATCHER_TXMGR_MIN_TIP_CAP=1` - - -### txmgr.not-in-mempool-timeout - -Timeout for aborting a tx send if the tx does not make it to the mempool. The -default value is `2m0s`. - - - `--txmgr.not-in-mempool-timeout=` - `--txmgr.not-in-mempool-timeout=2m0s` - `OP_BATCHER_TXMGR_TX_NOT_IN_MEMPOOL_TIMEOUT=2m0s` - - -### txmgr.receipt-query-interval - -Frequency to poll for receipts. The default value is `12s`. - - - `--txmgr.receipt-query-interval=` - `--txmgr.receipt-query-interval=12s` - `OP_BATCHER_TXMGR_RECEIPT_QUERY_INTERVAL=12s` - - -### txmgr.send-timeout - -Timeout for sending transactions. If 0 it is disabled. The default value is -`0s`. 
- - - `--txmgr.send-timeout=` - `--txmgr.send-timeout=0s` - `OP_BATCHER_TXMGR_TX_SEND_TIMEOUT=0s` - - -### wait-node-sync - -Indicates if, during startup, the batcher should wait for a recent batcher tx -on L1 to finalize (via more block confirmations). This should help avoid -duplicate batcher txs. The default value is `false`. - - - `--wait-node-sync=` - `--wait-node-sync=false` - `OP_BATCHER_WAIT_NODE_SYNC=false` - - -## Miscellaneous - -### help - -Show help. The default value is false. - - - `--help=` - `--help=false` - - -### version - -Print the version. The default value is false. - - - `--version=` - `--version=false` - - -## Recommendations - -### Set your `OP_BATCHER_MAX_CHANNEL_DURATION` - - - The default value inside `op-batcher`, if not specified, is still `0`, which means channel duration tracking is disabled. - For very low throughput chains, this would mean to fill channels until close to the sequencing window and post the channel to `L1 SUB_SAFETY_MARGIN` L1 blocks before the sequencing window expires. - - -To minimize costs, we recommend setting your `OP_BATCHER_MAX_CHANNEL_DURATION` to target 5 hours, with a value of `1500` L1 blocks. When non-zero, this parameter is the max time (in L1 blocks, which are 12 seconds each) between which batches will be submitted to the L1. If you have this set to 5 for example, then your batcher will send a batch to the L1 every 5\*12=60 seconds. When using blobs, because 130kb blobs need to be purchased in full, if your chain doesn't generate at least \~130kb of data in those 60 seconds, then you'll be posting only partially full blobs and wasting storage. - -* We do not recommend setting any values higher than targeting 5 hours, as batches have to be submitted within the sequencing window which defaults to 12 hours for OP chains, otherwise your chain may experience a 12 hour long chain reorg. 
5 hours is the longest length of time we recommend that still sits snugly within that 12 hour window to avoid affecting stability. -* If your chain fills up full blobs of data before the `OP_BATCHER_MAX_CHANNEL_DURATION` elapses, a batch will be submitted anyways - (e.g. even if the OP Mainnet batcher sets an `OP_BATCHER_MAX_CHANNEL_DURATION` of 5 hours, it will still be submitting batches every few minutes) - - - While setting an`OP_BATCHER_MAX_CHANNEL_DURATION` of `1500` results in the cheapest fees, it also means that your [safe head](https://github.com/ethereum-optimism/specs/blob/main/specs/glossary.md#safe-l2-head) can stall for up to 5 hours. - - * This will negatively impact apps on your chain that rely on the safe head for operation. While many apps can likely operate simply by following the unsafe head, often Centralized Exchanges or third party bridges wait until transactions are marked safe before processing deposits and withdrawal. - * Thus a larger gap between posting batches can result in significant delays in the operation of certain types of high-security applications. - - -### Configure your batcher to use multiple blobs - -The `op-batcher` has the capabilities to send multiple blobs per single blob transaction. This is accomplished by the use of multi-frame channels, see the [specs](https://specs.optimism.io/protocol/derivation.html#frame-format) for more technical details on channels and frames. 
- -A minimal batcher configuration (with env vars) to enable 6-blob batcher transactions is: - -``` - - OP_BATCHER_BATCH_TYPE=1 # span batches, optional - - OP_BATCHER_DATA_AVAILABILITY_TYPE=blobs - - OP_BATCHER_TARGET_NUM_FRAMES=6 # 6 blobs per tx - - OP_BATCHER_TXMGR_MIN_BASEFEE=2.0 # 2 gwei, might need to tweak, depending on gas market - - OP_BATCHER_TXMGR_MIN_TIP_CAP=2.0 # 2 gwei, might need to tweak, depending on gas market - - OP_BATCHER_RESUBMISSION_TIMEOUT=240s # wait 4 min before bumping fees -``` - -This enables blob transactions and sets the target number of frames to 6, which translates to 6 blobs per transaction. -The minimum tip cap and base fee are also lifted to 2 gwei because it is uncertain how easy it will be to get 6-blob transactions included and slightly higher priority fees should help. -The resubmission timeout is increased to a few minutes to give more time for inclusion before bumping the fees because current transaction pool implementations require a doubling of fees for blob transaction replacements. - -Multi-blob transactions are particularly useful for medium to high-throughput chains, where enough transaction volume exists to fill up 6 blobs in a reasonable amount of time. -You can use [this calculator](https://docs.google.com/spreadsheets/d/12VIiXHaVECG2RUunDSVJpn67IQp9NHFJqUsma2PndpE/edit) for your chain to determine what number of blobs are right for you, and what gas scalar configuration to use. Please also refer to guide on [Using Blobs](/builders/chain-operators/management/blobs) for chain operators. diff --git a/pages/builders/chain-operators/configuration/overview.mdx b/pages/builders/chain-operators/configuration/overview.mdx deleted file mode 100644 index e5976b068..000000000 --- a/pages/builders/chain-operators/configuration/overview.mdx +++ /dev/null @@ -1,52 +0,0 @@ ---- -title: Chain Operator Configurations -lang: en-US -description: Learn the how to configure an OP Stack chain. 
---- - -import { Callout, Steps } from 'nextra/components' - -# Chain operator configurations - -OP Stack chains can be configured for the Chain Operator's needs. Each -component of the stack has its own considerations. See the following for -documentation for details on configuring each piece. - - - - {

Rollup Configuration

} - - Deploying your OP Stack contracts requires creating a deployment configuration - JSON file. This defines the behavior of your network at its genesis. - * **Important Notes:** - * The Rollup Configuration sets parameters for the L1 smart contracts upon deployment. These parameters govern the behavior of your chain and are critical to its operation. - * Be aware that many of these values cannot be changed after deployment or require a complex process to update. - Carefully consider and validate all settings during configuration to avoid issues later. - - * [Rollup Configuration Documentation](/builders/chain-operators/configuration/rollup) - - {

Batcher Configuration

} - - The batcher is the service that submits the L2 Sequencer data to L1, to make - it available for verifiers. These configurations determine the batcher's - behavior. - - * [Batcher Configuration Documentation](/builders/chain-operators/configuration/batcher) - - {

Proposer Configuration

} - - The proposer is the service that submits the output roots to the L1. These - configurations determine the proposer's behavior. - - * [Proposer Configuration Documentation](/builders/chain-operators/configuration/proposer) - - {

Node Configuration

} - - The rollup node has a wide array of configurations for both the consensus and - execution clients. - - * [Node Configuration Documentation](/builders/node-operators/configuration/base-config) - -
- - diff --git a/pages/builders/chain-operators/configuration/proposer.mdx b/pages/builders/chain-operators/configuration/proposer.mdx deleted file mode 100644 index 4702f208c..000000000 --- a/pages/builders/chain-operators/configuration/proposer.mdx +++ /dev/null @@ -1,492 +0,0 @@ ---- -title: Proposer Configuration -lang: en-US -description: Learn the OP Stack proposer configurations. ---- - -import { Tabs } from 'nextra/components' - -# Proposer configuration - -This page list all configuration options for op-proposer. The op-proposer posts -the output roots to the L1, to make it available for verifiers. The following -options are from the `--help` in [v1.7.6](https://github.com/ethereum-optimism/optimism/releases/tag/v1.7.6). - -## Global options - -### active-sequencer-check-duration - -The duration between checks to determine the active sequencer endpoint. The -default value is `2m0s`. - - - `--active-sequencer-check-duration=` - `--active-sequencer-check-duration=2m0s` - `OP_PROPOSER_ACTIVE_SEQUENCER_CHECK_DURATION=2m0s` - - -### allow-non-finalized - -Allow the proposer to submit proposals for L2 blocks from non-finalized L1 -blocks. The default value is false. - - - `--allow-non-finalized=` - `--allow-non-finalized=false` - `OP_PROPOSER_ALLOW_NON_FINALIZED=false` - - -### fee-limit-multiplier - -The multiplier applied to fee suggestions to limit fee increases. The default -value is 5. - - - `--fee-limit-multiplier=` - `--fee-limit-multiplier=5` - `OP_PROPOSER_TXMGR_FEE_LIMIT_MULTIPLIER=5` - - -### game-factory-address - -Address of the DisputeGameFactory contract. - - - `--game-factory-address=` - `--game-factory-address=` - `OP_PROPOSER_GAME_FACTORY_ADDRESS=` - - -### game-type - -Dispute game type to create via the configured DisputeGameFactory. The default -value is 0. - - - `--game-type=` - `--game-type=0` - `OP_PROPOSER_GAME_TYPE=0` - - -### hd-path - -The HD path used to derive the sequencer wallet from the mnemonic. 
- - - `--hd-path=` - `--hd-path=` - `OP_PROPOSER_HD_PATH=` - - -### l1-eth-rpc - -HTTP provider URL for L1. - - - `--l1-eth-rpc=` - `--l1-eth-rpc=` - `OP_PROPOSER_L1_ETH_RPC=` - - -### l2-output-hd-path - -DEPRECATED: The HD path used to derive the l2output wallet from the mnemonic. - - - `--l2-output-hd-path=` - `--l2-output-hd-path=` - `OP_PROPOSER_L2_OUTPUT_HD_PATH=` - - -### l2oo-address - -Address of the L2OutputOracle contract. - - - `--l2oo-address=` - `--l2oo-address=` - `OP_PROPOSER_L2OO_ADDRESS=` - - -### log.color - -Color the log output if in terminal mode. The default value is false. - - - `--log.color=` - `--log.color=false` - `OP_PROPOSER_LOG_COLOR=false` - - -### log.format - -Format the log output. Supported formats: 'text', 'terminal', 'logfmt', 'json', -'json-pretty'. The default value is text. - - - `--log.format=` - `--log.format=text` - `OP_PROPOSER_LOG_FORMAT=text` - - -### log.level - -The lowest log level that will be output. The default value is INFO. - - - `--log.level=` - `--log.level=INFO` - `OP_PROPOSER_LOG_LEVEL=INFO` - - -### metrics.addr - -Metrics listening address. The default value is "0.0.0.0". - - - `--metrics.addr=` - `--metrics.addr=0.0.0.0` - `OP_PROPOSER_METRICS_ADDR=0.0.0.0` - - -### metrics.enabled - -Enable the metrics server. The default value is false. - - - `--metrics.enabled=` - `--metrics.enabled=false` - `OP_PROPOSER_METRICS_ENABLED=false` - - -### metrics.port - -Metrics listening port. The default value is 7300. - - - `--metrics.port=` - `--metrics.port=7300` - `OP_PROPOSER_METRICS_PORT=7300` - - -### mnemonic - -The mnemonic used to derive the wallets for the service. - - - `--mnemonic=` - `--mnemonic=` - `OP_PROPOSER_MNEMONIC=` - - -### network-timeout - -Timeout for all network operations. The default value is 10s. - - - `--network-timeout=` - `--network-timeout=10s` - `OP_PROPOSER_NETWORK_TIMEOUT=10s` - - -### num-confirmations - -Number of confirmations to wait after sending a transaction. 
The default value -is 10. - - - `--num-confirmations=` - `--num-confirmations=10` - `OP_PROPOSER_NUM_CONFIRMATIONS=10` - - -### poll-interval - -How frequently to poll L2 for new blocks. The default value is 6s. - - - `--poll-interval=` - `--poll-interval=6s` - `OP_PROPOSER_POLL_INTERVAL=6s` - - -### pprof.addr - -pprof listening address. The default value is "0.0.0.0". - - - `--pprof.addr=` - `--pprof.addr=0.0.0.0` - `OP_PROPOSER_PPROF_ADDR=0.0.0.0` - - -### pprof.enabled - -Enable the pprof server. The default value is false. - - - `--pprof.enabled=` - `--pprof.enabled=false` - `OP_PROPOSER_PPROF_ENABLED=false` - - -### pprof.path - -pprof file path. If it is a directory, the path is `{dir}/{profileType}.prof` - - - `--pprof.path=` - `--pprof.path=` - `OP_PROPOSER_PPROF_PATH=` - - -### pprof.port - -pprof listening port. The default value is 6060. - - - `--pprof.port=` - `--pprof.port=6060` - `OP_PROPOSER_PPROF_PORT=6060` - - -### pprof.type - -pprof profile type. One of cpu, heap, goroutine, threadcreate, block, mutex, -allocs - - - `--pprof.type=` - `--pprof.type=` - `OP_PROPOSER_PPROF_TYPE=` - - -### private-key - -The private key to use with the service. Must not be used with mnemonic. - - - `--private-key=` - `--private-key=` - `OP_PROPOSER_PRIVATE_KEY=` - - -### proposal-interval - -Interval between submitting L2 output proposals when the dispute game factory -address is set. The default value is 0s. - - - `--proposal-interval=` - `--proposal-interval=0s` - `OP_PROPOSER_PROPOSAL_INTERVAL=0s` - - -### resubmission-timeout - -Duration we will wait before resubmitting a transaction to L1. The default -value is 48s. - - - `--resubmission-timeout=` - `--resubmission-timeout=48s` - `OP_PROPOSER_RESUBMISSION_TIMEOUT=48s` - - -### rollup-rpc - -HTTP provider URL for the rollup node. A comma-separated list enables the -active rollup provider. - - - `--rollup-rpc=` - `--rollup-rpc=` - `OP_PROPOSER_ROLLUP_RPC=` - - -### rpc.addr - -rpc listening address. 
The default value is "0.0.0.0". - - - `--rpc.addr=` - `--rpc.addr=0.0.0.0` - `OP_PROPOSER_RPC_ADDR=0.0.0.0` - - -### rpc.enable-admin - -Enable the admin API. The default value is false. - - - `--rpc.enable-admin=` - `--rpc.enable-admin=false` - `OP_PROPOSER_RPC_ENABLE_ADMIN=false` - - -### rpc.port - -rpc listening port. The default value is 8545. - - - `--rpc.port=` - `--rpc.port=8545` - `OP_PROPOSER_RPC_PORT=8545` - - -### safe-abort-nonce-too-low-count - -Number of ErrNonceTooLow observations required to give up on a tx at a -particular nonce without receiving confirmation. The default value is 3. - - - `--safe-abort-nonce-too-low-count=` - `--safe-abort-nonce-too-low-count=3` - `OP_PROPOSER_SAFE_ABORT_NONCE_TOO_LOW_COUNT=3` - - -### signer.address - -Address the signer is signing transactions for. - - - `--signer.address=` - `--signer.address=` - `OP_PROPOSER_SIGNER_ADDRESS=` - - -### signer.endpoint - -Signer endpoint the client will connect to. - - - `--signer.endpoint=` - `--signer.endpoint=` - `OP_PROPOSER_SIGNER_ENDPOINT=` - - -### signer.tls.ca - -tls ca cert path. The default value is "tls/ca.crt". - - - `--signer.tls.ca=` - `--signer.tls.ca=tls/ca.crt` - `OP_PROPOSER_SIGNER_TLS_CA=tls/ca.crt` - - -### signer.tls.cert - -tls cert path. The default value is "tls/tls.crt". - - - `--signer.tls.cert=` - `--signer.tls.cert=tls/tls.crt` - `OP_PROPOSER_SIGNER_TLS_CERT=tls/tls.crt` - - -### signer.tls.key - -tls key. The default value is "tls/tls.key". - - - `--signer.tls.key=` - `--signer.tls.key=tls/tls.key` - `OP_PROPOSER_SIGNER_TLS_KEY=tls/tls.key` - - -### txmgr.fee-limit-threshold - -The minimum threshold (in GWei) at which fee bumping starts to be capped. The -default value is 100. - - - `--txmgr.fee-limit-threshold=` - `--txmgr.fee-limit-threshold=100` - `OP_PROPOSER_TXMGR_FEE_LIMIT_THRESHOLD=100` - - -### txmgr.min-basefee - -Enforces a minimum base fee (in GWei) to assume when determining tx fees. The -default value is 1. 
- - - `--txmgr.min-basefee=` - `--txmgr.min-basefee=1` - `OP_PROPOSER_TXMGR_MIN_BASEFEE=1` - - -### txmgr.min-tip-cap - -Enforces a minimum tip cap (in GWei) to use when determining tx fees. The -default value is 1. - - - `--txmgr.min-tip-cap=` - `--txmgr.min-tip-cap=1` - `OP_PROPOSER_TXMGR_MIN_TIP_CAP=1` - - -### txmgr.not-in-mempool-timeout - -Timeout for aborting a tx send if the tx does not make it to the mempool. The -default value is 2m0s. - - - `--txmgr.not-in-mempool-timeout=` - `--txmgr.not-in-mempool-timeout=2m0s` - `OP_PROPOSER_TXMGR_TX_NOT_IN_MEMPOOL_TIMEOUT=2m0s` - - -### txmgr.receipt-query-interval - -Frequency to poll for receipts. The default value is 12s. - - - `--txmgr.receipt-query-interval=` - `--txmgr.receipt-query-interval=12s` - `OP_PROPOSER_TXMGR_RECEIPT_QUERY_INTERVAL=12s` - - -### txmgr.send-timeout - -Timeout for sending transactions. If 0 it is disabled. The default value is 0s. - - - `--txmgr.send-timeout=` - `--txmgr.send-timeout=0s` - `OP_PROPOSER_TXMGR_TX_SEND_TIMEOUT=0s` - - -### wait-node-sync - -Indicates if, during startup, the proposer should wait for the rollup node to -sync to the current L1 tip before proceeding with its driver loop. The default -value is false. - - - `--wait-node-sync=` - `--wait-node-sync=false` - `OP_PROPOSER_WAIT_NODE_SYNC=false` - - -## Miscellaneous - -### help - -Show help. The default value is false. - - - `--help=` - `--help=false` - - -### version - -Print the version. The default value is false. - - - `--version=` - `--version=false` - diff --git a/pages/builders/chain-operators/configuration/rollup.mdx b/pages/builders/chain-operators/configuration/rollup.mdx deleted file mode 100644 index 52600572a..000000000 --- a/pages/builders/chain-operators/configuration/rollup.mdx +++ /dev/null @@ -1,1241 +0,0 @@ ---- -title: Rollup deployment configuration -lang: en-US -description: Learn about the OP Stack rollup deployment configurations. 
----
-
-import { Callout } from 'nextra/components'
-
-# Rollup deployment configuration
-
-New OP Stack blockchains are currently configured with a deployment
-configuration JSON file that is passed into the smart contract
-[deployment script](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/scripts/deploy/Deploy.s.sol).
-You can see example deployment configuration files in the
-[deploy-config directory](https://github.com/ethereum-optimism/optimism/tree/develop/packages/contracts-bedrock/deploy-config).
-This document highlights the deployment configurations and their values.
-
-
- The Rollup configuration is an active work in progress and will likely evolve
- significantly as time goes on. If something isn't working about your
- configuration, you can refer to the [source code](https://github.com/ethereum-optimism/optimism/blob/develop/op-chain-ops/genesis/config.go).
-
-
-
- Standard configuration is the set of requirements for an OP Stack chain to be
- considered a Standard Chain within the Superchain. These requirements are
- currently a draft, pending governance approval. For more details, please see
- this [governance thread](https://gov.optimism.io/t/season-6-draft-standard-rollup-charter/8135)
- and the actual requirements in the [OP Stack Configurability Specification](https://specs.optimism.io/protocol/configurability.html).
-
-
-## Deployment configuration values
-
-These values are provided to the deployment configuration JSON file when
-deploying the L1 contracts.
-
-### Offset values
-
-These offset values determine when network upgrades (hardforks) activate on
-your blockchain.
-
-#### l2GenesisRegolithTimeOffset
-
-L2GenesisRegolithTimeOffset is the number of seconds after genesis block that
-Regolith hard fork activates. Set it to 0 to activate at genesis. Nil to
-disable Regolith.
- -* **Type:** Number of seconds -* **Default value:** nil -* **Recommended value:** "0x0" -* **Notes:** -* **Standard Config Requirement:** Network upgrade (hardfork) is activated. - -*** - -#### l2GenesisCanyonTimeOffset - -L2GenesisCanyonTimeOffset is the number of seconds after genesis block that -Canyon hard fork activates. Set it to 0 to activate at genesis. Nil to -disable Canyon. - -* **Type:** Number of seconds -* **Default value:** nil -* **Recommended value:** "0x0" -* **Notes:** -* **Standard Config Requirement:** Network upgrade (hardfork) is activated. - -*** - -#### l2GenesisDeltaTimeOffset - -L2GenesisDeltaTimeOffset is the number of seconds after genesis block that -Delta hard fork activates. Set it to 0 to activate at genesis. Nil to disable -Delta. - -* **Type:** Number of seconds -* **Default value:** nil -* **Recommended value:** "0x0" -* **Notes:** -* **Standard Config Requirement:** Network upgrade (hardfork) is activated. - -*** - -#### l2GenesisEcotoneTimeOffset - -L2GenesisEcotoneTimeOffset is the number of seconds after genesis block that -Ecotone hard fork activates. Set it to 0 to activate at genesis. Nil to disable -Ecotone. - -* **Type:** Number of seconds -* **Default value:** nil -* **Recommended value:** "0x0" -* **Notes:** -* **Standard Config Requirement:** Network upgrade (hardfork) is activated. - -*** - -#### l2GenesisFjordTimeOffset - -L2GenesisFjordTimeOffset is the number of seconds after genesis block that -Fjord hard fork activates. Set it to 0 to activate at genesis. Nil to -disable Fjord. - -* **Type:** Number of seconds -* **Default value:** nil -* **Recommended value:** "0x0" -* **Notes:** -* **Standard Config Requirement:** Network upgrade (hardfork) is activated. - -*** - -#### l2GenesisInteropTimeOffset - -L2GenesisInteropTimeOffset is the number of seconds after genesis block that -the Interop hard fork activates. Set it to 0 to activate at genesis. Nil to -disable Interop. 
-
-* **Type:** Number of seconds
-* **Default value:** nil
-* **Recommended value:**
-* **Notes:** Interoperability is still [experimental](https://specs.optimism.io/interop/overview.html).
-* **Standard Config Requirement:** Non-standard feature.
-
-***
-
-#### l1CancunTimeOffset
-
-When Cancun activates. Relative to L1 genesis.
-
-* **Type:** Number of seconds
-* **Default value:** None
-* **Recommended value:** "0x0"
-* **Notes:**
-* **Standard Config Requirement:** Network upgrade (hardfork) is activated.
-
-***
-
-### Admin addresses
-
-#### finalSystemOwner
-
-FinalSystemOwner is the L1 system owner. It owns any ownable L1 contracts.
-
-* **Type:** L1 Address
-* **Default value:** None
-* **Recommended value:** It is recommended to have a single admin
-address to retain a common security model.
-* **Notes:** Must not be `address(0)`
-* **Standard Config Requirement:** Must be Chain Governor or Servicer.
-However, the L1 ProxyAdmin owner must be held by the Optimism Security Council.
-At the moment, the L1 ProxyAdmin owner is transferred from the Chain
-Governor or Servicer to [0x5a0Aae59D09fccBdDb6C6CcEB07B7279367C3d2A](https://etherscan.io/address/0x5a0Aae59D09fccBdDb6C6CcEB07B7279367C3d2A)
-
-***
-
-#### proxyAdminOwner
-
-ProxyAdmin contract owner on the L2, which owns all of the Proxy contracts for
-every predeployed contract in the range `0x42...0000` to `0x42...2048`. This
-makes predeployed contracts easily upgradeable.
-
-* **Type:** L2 Address
-* **Default value:** None
-* **Recommended value:** It is recommended to have a single admin
-address to retain a common security model.
-* **Notes:** Must not be `address(0)`
-* **Standard Config Requirement:**
-
-***
-
-### Proxy addresses
-
-#### l1StandardBridgeProxy
-
-L1StandardBridgeProxy represents the address of the L1StandardBridgeProxy on L1
-and is used as part of building the L2 genesis state.
- -* **Type:** L1 Address -* **Default value:** None -* **Recommended value:** -* **Notes:** Must not be `address(0)` -* **Standard Config Requirement:** Implementation contract must be the most -up-to-date, governance-approved version of the OP Stack codebase, and, if the -chain has been upgraded in the past, that the previous versions were a standard -release of the codebase. - -*** - -#### l1CrossDomainMessengerProxy - -L1CrossDomainMessengerProxy represents the address of the -L1CrossDomainMessengerProxy on L1 and is used as part of building the L2 -genesis state. - -* **Type:** L1 Address -* **Default value:** None -* **Recommended value:** -* **Notes:** Must not be `address(0)` -* **Standard Config Requirement:** Implementation contract must be the most -up-to-date, governance-approved version of the OP Stack codebase, and, if the -chain has been upgraded in the past, that the previous versions were a standard -release of the codebase. - -*** - -#### l1ERC721BridgeProxy - -L1ERC721BridgeProxy represents the address of the L1ERC721Bridge on L1 and is -used as part of building the L2 genesis state. - -* **Type:** L1 Address -* **Default value:** None -* **Recommended value:** -* **Notes:** Must not be `address(0)` -* **Standard Config Requirement:** Implementation contract must be the most -up-to-date, governance-approved version of the OP Stack codebase, and, if the -chain has been upgraded in the past, that the previous versions were a standard -release of the codebase. - -*** - -#### systemConfigProxy - -SystemConfigProxy represents the address of the SystemConfigProxy on L1 and is -used as part of the derivation pipeline. 
- -* **Type:** L1 Address -* **Default value:** None -* **Recommended value:** -* **Notes:** Must not be `address(0)` -* **Standard Config Requirement:** Implementation contract must be the most -up-to-date, governance-approved version of the OP Stack codebase, and, if the -chain has been upgraded in the past, that the previous versions were a standard -release of the codebase. - -*** - -#### optimismPortalProxy - -OptimismPortalProxy represents the address of the OptimismPortalProxy on L1 and -is used as part of the derivation pipeline. - -* **Type:** L1 Address -* **Default value:** None -* **Recommended value:** -* **Notes:** Must not be `address(0)` -* **Standard Config Requirement:** Implementation contract must be the most -up-to-date, governance-approved version of the OP Stack codebase, and, if the -chain has been upgraded in the past, that the previous versions were a standard -release of the codebase. - -*** - -### Blocks - -These fields apply to L2 blocks: Their timing, when they need to be written to L1, and how they get written. - -#### l2BlockTime - -Number of seconds between each L2 block. Must be \< L1 block time (12 on mainnet and Sepolia). - -* **Type:** Number of seconds -* **Default value:** None -* **Recommended value:** -* **Notes:** Must not be `0`. Must be less than the L1 blocktime and a whole number. -* **Standard Config Requirement:** 1 or 2 seconds - -*** - -#### maxSequencerDrift - -How far the L2 timestamp can differ from the actual L1 timestamp. - -* **Type:** Number of seconds -* **Default value:** None -* **Recommended value:** 1800 -* **Notes:** Must not be `0`. 1800 (30 minutes) is the constant that takes -effect with the [Fjord activation](/builders/node-operators/network-upgrades#fjord). -* **Standard Config Requirement:** - -*** - -#### sequencerWindowSize - -Maximum number of L1 blocks that a Sequencer can wait to incorporate the -information in a specific L1 block. 
For example, if the window is 10 then the -information in L1 block n must be incorporated by L1 block n+10. - -* **Type:** Number of blocks -* **Default value:** None -* **Recommended value:** -* **Notes:** Must not be `0`. 3600 (12 hours) is suggested. -* **Standard Config Requirement:** 3\_600 base layer blocks (12 hours for an - L2 on Ethereum, assuming 12 second L1 blocktime). This is an important value - for constraining the sequencer's ability to re-order transactions; higher - values would pose a risk to user protections. - -*** - -#### channelTimeout - -Maximum number of L1 blocks that a transaction channel frame can be considered -valid. A transaction channel frame is a chunk of a compressed batch of -transactions. After the timeout, the frame is dropped. - -* **Type:** Number of blocks -* **Default value:** 50 -* **Recommended value:** -* **Notes:** This default value was introduced in the [Granite network upgrade](/builders/node-operators/network-upgrades#granite) -* **Standard Config Requirement:** - -*** - -#### p2pSequencerAddress - -Address of the key that the Sequencer uses to sign blocks on the p2p network. - -* **Type:** L1 Address -* **Default value:** None -* **Recommended value:** -* **Notes:** -* **Standard Config Requirement:** No requirement. - -*** - -#### batchInboxAddress - -Address that Sequencer transaction batches are sent to on L1. - -* **Type:** L1 Address -* **Default value:** None -* **Recommended value:** -* **Notes:** -* **Standard Config Recommendation:** Convention is `versionByte` || -`keccak256(bytes32(chainId))[:19]`, where || denotes concatenation, -`versionByte` is `0x00`, and `chainId` is a `uint256`. -This is to cover the full range of chain ids, to the full `uint256` size. 
-Here's how you can get this address:
-
-```solidity
-bytes32 hash = keccak256(abi.encodePacked(bytes32(uint256(chainId))));
-// [:19] means taking the first 19 bytes of the hash
-// and then prepending a version byte of zero: 0x00
-// `0x00{hash[:19]}`
-```
-
-***
-
-#### batchSenderAddress
-
-Address that nodes will filter for when searching for Sequencer transaction
-batches being sent to the batchInboxAddress. Can be updated later via the
-SystemConfig contract on L1.
-
-* **Type:** L1 Address
-* **Default value:** Batcher, an address for which you own the private key.
-* **Recommended value:**
-* **Notes:** Must not be `address(0)`
-* **Standard Config Requirement:** No requirement.
-
-***
-
-#### systemConfigStartBlock
-
-SystemConfigStartBlock represents the block from which the op-node should
-start syncing. It is an override to set this value on legacy networks where it
-is not set by default. It can be removed once all networks have this value set
-in their storage.
-
-* **Type:** L2 Block Number
-* **Default value:** None
-* **Recommended value:**
-* **Notes:**
-* **Standard Config Requirement:** The block where the SystemConfig was
-initialized.
-
-***
-
-### Chain Information
-
-#### l1StartingBlockTag
-
-Block tag for the L1 block from which the L2 chain will begin syncing.
-It is generally recommended to use a finalized block to avoid issues with reorgs.
-
-* **Type:** Block hash
-* **Default value:** None
-* **Recommended value:**
-* **Notes:** Must not be `0`.
-* **Standard Config Requirement:**
-
-***
-
-#### l1ChainID
-
-Chain ID of the L1 chain.
-
-* **Type:** Number
-* **Default value:** None
-* **Recommended value:**
-* **Notes:** Must not be `0`. 1 for L1 Ethereum mainnet, 11155111 for the
-Sepolia test network; see [here](https://chainlist.org/?testnets=true)
-for other blockchains.
-* **Standard Config Requirement:** 1 (Ethereum)
-
-***
-
-#### l2ChainID
-
-Chain ID of the L2 chain.
- -* **Type:** Number -* **Default value:** None -* **Recommended value:** -* **Notes:** Must not be `0`. For security reasons, should be unique. Chains -should add their chain IDs to [ethereum-lists/chains](https://github.com/ethereum-lists/chains). -* **Standard Config Requirement:** Foundation-approved, globally unique value. - -*** - -#### l2GenesisBlockExtraData - -L2GenesisBlockExtraData is configurable extradata. Will default to -\[]byte("BEDROCK") if left unspecified. - -* **Type:** Number -* **Default value:** \[]byte("BEDROCK") -* **Recommended value:** -* **Notes:** -* **Standard Config Requirement:** - -*** - -#### superchainConfigGuardian - -SuperchainConfigGuardian represents the GUARDIAN account in the -SuperchainConfig. Has the ability to pause withdrawals. - -* **Type:** L1 Address -* **Default value:** None -* **Recommended value:** -* **Notes:** Must not be `address(0)` -* **Standard Config Requirement:** [0x09f7150D8c019BeF34450d6920f6B3608ceFdAf2](https://etherscan.io/address/0x09f7150D8c019BeF34450d6920f6B3608ceFdAf2) -A 1/1 Safe owned by the Security Council Safe, with the [Deputy Guardian Module](https://specs.optimism.io/protocol/safe-extensions.html#deputy-guardian-module) -enabled to allow the Optimism Foundation to act as Guardian. - -*** - -### Gas - -* **Standard Config Requirement:** Set such that Fee Margin is between 0 and - 50%. -* **Standard Config Requirement:** No higher than 200\_000\_000 gas. Chain - operators are driven to maintain a stable and reliable chain. When considering - a change to this value, careful deliberation is necessary. - -#### l2GenesisBlockGasLimit - -L2GenesisBlockGasLimit represents the chain's block gas limit. - -* **Type:** Number -* **Default value:** None -* **Recommended value:** -* **Notes:** Must not be `0`. Must be greater than `MaxResourceLimit` + `SystemTxMaxGas`. 
-* **Standard Config Requirement:** - -*** - -#### l2GenesisBlockBaseFeePerGas - -L2GenesisBlockBaseFeePerGas represents the base fee per gas. - -* **Type:** Number -* **Default value:** None -* **Recommended value:** -* **Notes:** L2 genesis block base fee per gas cannot be `nil`. -* **Standard Config Requirement:** - -*** - -#### eip1559Elasticity - -EIP1559Elasticity is the elasticity of the EIP1559 fee market. - -* **Type:** Number -* **Default value:** None -* **Recommended value:** -* **Notes:** Must not be `0`. -* **Standard Config Requirement:** - -*** - -#### eip1559Denominator - -EIP1559Denominator is the denominator of EIP1559 base fee market. - -* **Type:** Number -* **Default value:** None -* **Recommended value:** -* **Notes:** Must not be `0`. -* **Standard Config Requirement:** - -*** - -#### eip1559DenominatorCanyon - -EIP1559DenominatorCanyon is the denominator of EIP1559 base fee market when -Canyon is active. - -* **Type:** Number -* **Default value:** None -* **Recommended value:** 250 -* **Notes:** Must not be `0` if Canyon is activated. -* **Standard Config Requirement:** - -*** - -#### gasPriceOracleBaseFeeScalar - -GasPriceOracleBaseFeeScalar represents the value of the base fee scalar used -for fee calculations. - -* **Type:** Number -* **Default value:** None -* **Recommended value:** -* **Notes:** Should not be `0`. -* **Standard Config Requirement:** - -*** - -#### gasPriceOracleBlobBaseFeeScalar - -GasPriceOracleBlobBaseFeeScalar represents the value of the blob base fee -scalar used for fee calculations. - -* **Type:** Number -* **Default value:** None -* **Recommended value:** -* **Notes:** Should not be `0`. -* **Standard Config Requirement:** - -*** - -### Proposal fields - -These fields apply to output root proposals. The -l2OutputOracleSubmissionInterval is configurable, see the section below for -guidance. - -#### l2OutputOracleStartingBlockNumber - -Block number of the first OP Stack block. 
Typically this should be zero, but -this may be non-zero for networks that have been upgraded from a legacy system -(like OP Mainnet). Will be removed with the addition of permissionless -proposals. - -* **Type:** Number -* **Default value:** None -* **Recommended value:** 0 -* **Notes:** Should be `0` for new chains. -* **Standard Config Requirement:** - -*** - -#### l2OutputOracleStartingTimestamp - -Timestamp of the first OP Stack block. This MUST be the timestamp corresponding -to the block defined by the l1StartingBlockTag. Will be removed with the -addition of permissionless proposals. - -* **Type:** Number -* **Default value:** None -* **Recommended value:** -* **Notes:** this MUST be the timestamp corresponding to the block defined by - the l1StartingBlockTag. -* **Standard Config Requirement:** - -*** - -#### l2OutputOracleSubmissionInterval - -Number of blocks between proposals to the L2OutputOracle. Will be removed with -the addition of permissionless proposals. - -* **Type:** Number of blocks -* **Default value:** None -* **Recommended value:** -* **Notes:** Must not be `0`. 120 (4 minutes) is suggested. -* **Standard Config Requirement:** - -*** - -#### finalizationPeriodSeconds - -Number of seconds that a proposal must be available to challenge before it is -considered finalized by the OptimismPortal contract. - -* **Type:** Number of seconds -* **Default value:** None -* **Recommended value:** -* **Notes:** Must not be `0`. Recommend 12 on test networks, seven days on - production ones. -* **Standard Config Requirement:** 7 days. High security. Excessively safe - upper bound that leaves enough time to consider social layer solutions to a - hack if necessary. Allows enough time for other network participants to - challenge the integrity of the corresponding output root. - -*** - -#### l2OutputOracleProposer - -Address that is allowed to submit output proposals to the L2OutputOracle -contract. 
Will be removed when the OP Stack has permissionless proposals. - -* **Type:** L1 Address -* **Default value:** None -* **Recommended value:** -* **Notes:** Must not be `address(0)` -* **Standard Config Requirement:** No requirement. This role is only active -when the OptimismPortal respected game type is `PERMISSIONED_CANNON`. The -L1ProxyAdmin sets the implementation of the `PERMISSIONED_CANNON` game type. -Thus, it determines the proposer configuration of the permissioned dispute game. - -*** - -#### l2OutputOracleChallenger - -Address that is allowed to challenge output proposals submitted to the -L2OutputOracle. Will be removed when the OP Stack has permissionless -challenges. - -* **Type:** L1 Address -* **Default value:** None -* **Recommended value:** -* **Notes:** Must not be `address(0)`. It is recommended to have a single admin - address to retain a common security model. -* **Standard Config Requirement:** - -*** - -### Fee recipients - -#### baseFeeVaultRecipient - -BaseFeeVaultRecipient represents the recipient of fees accumulated in the -BaseFeeVault. Can be an account on L1 or L2, depending on the -BaseFeeVaultWithdrawalNetwork value. - -* **Type:** L1 or L2 Address -* **Default value:** None -* **Recommended value:** -* **Notes:** Must not be `address(0)`. It is recommended to have a single admin - address to retain a common security model. -* **Standard Config Requirement:** - -*** - -#### l1FeeVaultRecipient - -L1FeeVaultRecipient represents the recipient of fees accumulated in the -L1FeeVault. Can be an account on L1 or L2, depending on the -L1FeeVaultWithdrawalNetwork value. - -* **Type:** L1 or L2 Address -* **Default value:** None -* **Recommended value:** -* **Notes:** Must not be `address(0)`. It is recommended to have a single admin - address to retain a common security model. 
-* **Standard Config Requirement:**
-
-***
-
-#### sequencerFeeVaultRecipient
-
-SequencerFeeVaultRecipient represents the recipient of fees accumulated in the
-SequencerFeeVault. Can be an account on L1 or L2, depending on the
-SequencerFeeVaultWithdrawalNetwork value.
-
-* **Type:** L1 or L2 Address
-* **Default value:** None
-* **Recommended value:**
-* **Notes:** Must not be `address(0)`. It is recommended to have a single admin
-address to retain a common security model.
-* **Standard Config Requirement:**
-
-***
-
-### Minimum fee withdrawal amounts
-
-Withdrawals to L1 are expensive, and the minimum withdrawal amount prevents
-the overhead cost of continuous tiny withdrawals. If executing the withdrawal
-costs more than the fees being withdrawn, collecting the fees is not
-economical.
-
-***
-
-#### baseFeeVaultMinimumWithdrawalAmount
-
-BaseFeeVaultMinimumWithdrawalAmount represents the minimum withdrawal amount
-for the BaseFeeVault.
-
-* **Type:** Number in wei
-* **Default value:** None
-* **Recommended value:**
-* **Notes:**
-* **Standard Config Requirement:**
-
-***
-
-#### l1FeeVaultMinimumWithdrawalAmount
-
-L1FeeVaultMinimumWithdrawalAmount represents the minimum withdrawal amount for
-the L1FeeVault.
-
-* **Type:** Number in wei
-* **Default value:** None
-* **Recommended value:**
-* **Notes:**
-* **Standard Config Requirement:**
-
-***
-
-#### sequencerFeeVaultMinimumWithdrawalAmount
-
-SequencerFeeVaultMinimumWithdrawalAmount represents the minimum withdrawal
-amount for the SequencerFeeVault.
-
-* **Type:** Number in wei
-* **Default value:** None
-* **Recommended value:**
-* **Notes:**
-* **Standard Config Requirement:**
-
-***
-
-### Withdrawal network
-
-***
-
-#### baseFeeVaultWithdrawalNetwork
-
-BaseFeeVaultWithdrawalNetwork represents the withdrawal network for the
-BaseFeeVault. A value of 0 will withdraw ETH to the recipient address on L1 and
-a value of 1 will withdraw ETH to the recipient address on L2.
- -* **Type:** Number representing network enum -* **Default value:** None -* **Recommended value:** -* **Notes:** Withdrawals to Ethereum are more expensive. -* **Standard Config Requirement:** - -*** - -#### l1FeeVaultWithdrawalNetwork - -L1FeeVaultWithdrawalNetwork represents the withdrawal network for the -L1FeeVault. A value of 0 will withdraw ETH to the recipient address on L1 and a -value of 1 will withdraw ETH to the recipient address on L2. - -* **Type:** Number representing network enum -* **Default value:** None -* **Recommended value:** -* **Notes:** Withdrawals to Ethereum are more expensive. -* **Standard Config Requirement:** - -*** - -#### sequencerFeeVaultWithdrawalNetwork - -SequencerFeeVaultWithdrawalNetwork represents the withdrawal network for the -SequencerFeeVault. A value of 0 will withdraw ETH to the recipient address on -L1 and a value of 1 will withdraw ETH to the recipient address on L2. - -* **Type:** Number representing network enum -* **Default value:** None -* **Recommended value:** -* **Notes:** Withdrawals to Ethereum are more expensive. -* **Standard Config Requirement:** - -*** - -### Fault proofs - -*** - -#### faultGameAbsolutePrestate - -FaultGameAbsolutePrestate is the absolute prestate of Cannon. This is computed -by generating a proof from the 0th -> 1st instruction and grabbing the prestate -from the output JSON. All honest challengers should agree on the setup state of -the program. - -* **Type:** Hash -* **Default value:** None -* **Recommended value:** -* **Notes:** -* **Standard Config Requirement:** - -*** - -#### faultGameMaxDepth - -FaultGameMaxDepth is the maximum depth of the position tree within the fault -dispute game. `2^{FaultGameMaxDepth}` is how many instructions the execution -trace bisection game supports. Ideally, this should be conservatively set so -that there is always enough room for a full Cannon trace. 
-
-* **Type:** Number
-* **Default value:** None
-* **Recommended value:**
-* **Notes:**
-* **Standard Config Requirement:**
-
-***
-
-#### faultGameClockExtension
-
-FaultGameClockExtension is the amount of time to which the dispute game will
-set a potential grandchild claim's clock if the remaining time is less than
-this value at the time of the claim's creation.
-
-* **Type:** Number
-* **Default value:** None
-* **Recommended value:**
-* **Notes:**
-* **Standard Config Requirement:**
-
-***
-
-#### faultGameMaxClockDuration
-
-FaultGameMaxClockDuration is the maximum amount of time that may accumulate on
-a team's chess clock before they may no longer respond.
-
-* **Type:** Number
-* **Default value:** None
-* **Recommended value:**
-* **Notes:**
-* **Standard Config Requirement:**
-
-***
-
-#### faultGameGenesisBlock
-
-FaultGameGenesisBlock is the block number for genesis.
-
-* **Type:** Number
-* **Default value:** None
-* **Recommended value:**
-* **Notes:**
-* **Standard Config Requirement:**
-
-***
-
-#### faultGameGenesisOutputRoot
-
-FaultGameGenesisOutputRoot is the output root for the genesis block.
-
-* **Type:** Hash
-* **Default value:** None
-* **Recommended value:**
-* **Notes:**
-* **Standard Config Requirement:**
-
-***
-
-#### faultGameSplitDepth
-
-FaultGameSplitDepth is the depth at which the fault dispute game splits from
-output roots to execution trace claims.
-
-* **Type:** Number
-* **Default value:** None
-* **Recommended value:**
-* **Notes:**
-* **Standard Config Requirement:**
-
-***
-
-#### faultGameWithdrawalDelay
-
-FaultGameWithdrawalDelay is the number of seconds that users must wait before
-withdrawing ETH from a fault game.
-
-* **Type:** Number
-* **Default value:** None
-* **Recommended value:**
-* **Notes:**
-* **Standard Config Requirement:**
-
-***
-
-#### preimageOracleMinProposalSize
-
-PreimageOracleMinProposalSize is the minimum number of bytes that a large
-preimage oracle proposal can be.
- -* **Type:** Number -* **Default value:** None -* **Recommended value:** -* **Notes:** -* **Standard Config Requirement:** - -*** - -#### preimageOracleChallengePeriod - -PreimageOracleChallengePeriod is the number of seconds that challengers have to -challenge a large preimage proposal. - -* **Type:** Number of Seconds -* **Default value:** None -* **Recommended value:** -* **Notes:** -* **Standard Config Requirement:** - -*** - -#### proofMaturityDelaySeconds - -ProofMaturityDelaySeconds is the number of seconds that a proof must be mature -before it can be used to finalize a withdrawal. - -* **Type:** Number -* **Default value:** None -* **Recommended value:** -* **Notes:** Should not be `0`. -* **Standard Config Requirement:** - -*** - -#### disputeGameFinalityDelaySeconds - -DisputeGameFinalityDelaySeconds is an additional number of seconds a dispute -game must wait before it can be used to finalize a withdrawal. - -* **Type:** Number -* **Default value:** None -* **Recommended value:** -* **Notes:** Should not be `0`. -* **Standard Config Requirement:** - -*** - -#### respectedGameType - -RespectedGameType is the dispute game type that the OptimismPortal contract -will respect for finalizing withdrawals. - -* **Type:** Number -* **Default value:** None -* **Recommended value:** -* **Notes:** -* **Standard Config Requirement:** - -*** - -#### useFaultProofs - -UseFaultProofs is a flag that indicates if the system is using fault proofs -instead of the older output oracle mechanism. - -* **Type:** Boolean -* **Default value:** None -* **Recommended value:** -* **Notes:** You should understand the implications of running a Fault Proof - chain. -* **Standard Config Requirement:** - -*** - -### Custom Gas Token - -The Custom Gas Token configuration lets OP Stack chain operators deploy their -chain allowing a specific ERC-20 token to be deposited in as the native token -to pay for gas fees. Learn more [here](/stack/beta-features/custom-gas-token). 
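To make this concrete, here is a minimal, hypothetical deploy-config fragment enabling the feature (the token address is a placeholder for an already-deployed L1 ERC-20, not a real deployment):

```json
{
  "useCustomGasToken": true,
  "customGasTokenAddress": "0x1111111111111111111111111111111111111111"
}
```

With `useCustomGasToken` left at its default of `false`, the chain simply uses ETH as the gas-paying token.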
- -*** - -#### useCustomGasToken - -UseCustomGasToken is a flag to indicate that a custom gas token should be used. - -* **Type:** boolean -* **Default value:** None -* **Recommended value:** -* **Notes:** -* **Standard Config Requirement:** Non-standard feature. - -*** - -#### customGasTokenAddress - -CustomGasTokenAddress is the address of the ERC20 token to be used to pay for -gas on L2. - -* **Type:** Address -* **Default value:** None -* **Recommended value:** -* **Notes:** Must not be `address(0)`. -* **Standard Config Requirement:** Non-standard feature. - -### Alt-DA Mode - -Alt-DA Mode enables seamless integration of various Data Availability (DA) -Layers, regardless of their commitment type, into the OP Stack. This allows -any chain operator to launch an OP Stack chain using their favorite DA Layer -for sustainably low costs. Learn more [here](/stack/beta-features/alt-da-mode). - -*** - -#### usePlasma - -UsePlasma is a flag that indicates if the system is using op-plasma - -* **Type:** bool -* **Recommended value:** -* **Default value:** None -* **Notes:** -* **Standard Config Requirement:** Non-standard feature. - -*** - -#### daCommitmentType - -DACommitmentType specifies the allowed commitment - -* **Type:** string -* **Default value:** None -* **Notes:** DACommitmentType must be either KeccakCommitment or -GenericCommitment. However, KeccakCommitment will be deprecated soon. -* **Recommended value:** GenericCommitment -* **Standard Config Requirement:** Non-standard feature. - -*** - -#### daChallengeWindow - -DAChallengeWindow represents the block interval during which the availability -of a data commitment can be challenged. - -* **Type:** Number -* **Default value:** None -* **Recommended value:** -* **Notes:** DAChallengeWindow must not be 0 when using plasma mode -* **Standard Config Requirement:** Non-standard feature. 
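As a sketch, the Alt-DA fields in this section come together in a deploy config like the following (values are purely illustrative, not recommendations):

```json
{
  "usePlasma": true,
  "daCommitmentType": "GenericCommitment",
  "daChallengeWindow": 160,
  "daResolveWindow": 160
}
```

Note that both window values must be nonzero when plasma mode is enabled.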
-
-***
-
-#### daResolveWindow
-
-DAResolveWindow represents the block interval during which a data availability
-challenge can be resolved.
-
-* **Type:** Number
-* **Default value:** None
-* **Recommended value:**
-* **Notes:** DAResolveWindow must not be 0 when using plasma mode.
-* **Standard Config Requirement:** Non-standard feature.
-
-***
-
-#### daBondSize
-
-DABondSize represents the required bond size to initiate a data availability
-challenge.
-
-* **Type:** Number
-* **Default value:** None
-* **Recommended value:**
-* **Notes:**
-* **Standard Config Requirement:** Non-standard feature.
-
-***
-
-#### daResolverRefundPercentage
-
-DAResolverRefundPercentage represents the percentage of the resolving cost to
-be refunded to the resolver (e.g., a value of 100 means a 100% refund).
-
-* **Type:** Number
-* **Default value:** None
-* **Recommended value:**
-* **Notes:**
-* **Standard Config Requirement:** Non-standard feature.
-
-***
-
-#### daChallengeProxy
-
-DAChallengeProxy represents the L1 address of the DataAvailabilityChallenge
-contract.
-
-* **Type:** Address
-* **Default value:** None
-* **Recommended value:**
-* **Notes:** Must not be `address(0)` when using plasma mode.
-* **Standard Config Requirement:** Non-standard feature.
-
-***
-
-### Interoperability
-
-***
-
-#### useInterop
-
-UseInterop is a flag that indicates if the system is using interop.
-
-* **Type:** boolean
-* **Default value:** None
-* **Recommended value:** false
-* **Notes:** Interoperability is still [experimental](https://specs.optimism.io/interop/overview.html).
-* **Standard Config Requirement:** Non-standard feature.
-
-***
-
-### Governance
-
-***
-
-#### enableGovernance
-
-EnableGovernance determines whether to include the governance token predeploy.
-
-* **Type:** boolean
-* **Default value:** None
-* **Recommended value:** false
-* **Notes:**
-* **Standard Config Requirement:**
-
-***
-
-#### governanceTokenSymbol
-
-GovernanceTokenSymbol represents the ERC20 symbol of the GovernanceToken.
-
-* **Type:** string
-* **Default value:** None
-* **Recommended value:**
-* **Notes:**
-* **Standard Config Requirement:**
-
-***
-
-#### governanceTokenName
-
-GovernanceTokenName represents the ERC20 name of the GovernanceToken.
-
-* **Type:** string
-* **Default value:** None
-* **Recommended value:**
-* **Notes:**
-* **Standard Config Requirement:**
-
-***
-
-#### governanceTokenOwner
-
-GovernanceTokenOwner represents the owner of the GovernanceToken. Has the
-ability to mint and burn tokens.
-
-* **Type:** L2 Address
-* **Default value:** None
-* **Recommended value:**
-* **Notes:**
-* **Standard Config Requirement:**
-
-***
-
-### Miscellaneous
-
-***
-
-#### fundDevAccounts
-
-FundDevAccounts determines whether to fund the dev accounts. Should only
-be used during devnet deployments.
-
-* **Type:** Boolean
-* **Default value:**
-* **Recommended value:**
-* **Notes:**
-* **Standard Config Requirement:**
-
-***
-
-#### requiredProtocolVersion
-
-RequiredProtocolVersion indicates the protocol version that nodes are
-required to adopt, to stay in sync with the network.
-
-* **Type:** String
-* **Default value:**
-* **Recommended value:**
-* **Notes:**
-* **Standard Config Requirement:**
-
-***
-
-#### recommendedProtocolVersion
-
-RecommendedProtocolVersion indicates the protocol version that nodes are
-recommended to adopt, to stay in sync with the network.
-
-* **Type:** String
-* **Default value:**
-* **Recommended value:**
-* **Notes:**
-* **Standard Config Requirement:**
-
-***
-
-### Deprecated
-
-***
-
-#### (**DEPRECATED**) gasPriceOracleScalar
-
-GasPriceOracleScalar represents the initial value of the gas scalar in the
-GasPriceOracle predeploy.
Deprecated: Since Ecotone, this field is superseded -by GasPriceOracleBaseFeeScalar and GasPriceOracleBlobBaseFeeScalar. - -*** - -#### (**DEPRECATED**) gasPriceOracleOverhead - -GasPriceOracleOverhead represents the initial value of the gas overhead in the -GasPriceOracle predeploy. Deprecated: Since Ecotone, this field is superseded -by GasPriceOracleBaseFeeScalar and GasPriceOracleBlobBaseFeeScalar. - -*** - -#### (**DEPRECATED**) deploymentWaitConfirmations - -DeploymentWaitConfirmations is the number of confirmations to wait during -deployment. This is DEPRECATED and should be removed in a future PR. - -*** - -#### (**DEPRECATED**) numDeployConfirmations - -Number of confirmations to wait when deploying smart contracts to L1. diff --git a/pages/builders/chain-operators/deploy.mdx b/pages/builders/chain-operators/deploy.mdx deleted file mode 100644 index 87e4184f6..000000000 --- a/pages/builders/chain-operators/deploy.mdx +++ /dev/null @@ -1,19 +0,0 @@ ---- -title: Deploy -lang: en-US -description: Information on OP Stack genesis creation, deployment overview, and smart contract deployment. ---- - -import { Card, Cards } from 'nextra/components' - -# Deploy - -This section provides information on OP Stack genesis creation, deployment overview, and smart contract deployment. You'll find guides and overviews to help you understand and work with these topics. 
- - - - - - - - diff --git a/pages/builders/chain-operators/deploy/_meta.json b/pages/builders/chain-operators/deploy/_meta.json deleted file mode 100644 index 1cf642283..000000000 --- a/pages/builders/chain-operators/deploy/_meta.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "overview": "Overview", - "smart-contracts": "Contract deployment", - "genesis": "Genesis creation" - } - \ No newline at end of file diff --git a/pages/builders/chain-operators/deploy/genesis.mdx b/pages/builders/chain-operators/deploy/genesis.mdx deleted file mode 100644 index e8df81d83..000000000 --- a/pages/builders/chain-operators/deploy/genesis.mdx +++ /dev/null @@ -1,95 +0,0 @@ ---- -title: OP Stack genesis creation -lang: en-US -description: Learn how to create a genesis file for the OP Stack. ---- - -import { Callout } from 'nextra/components' - -# OP Stack genesis creation - - -This page is out of date and shows the legacy method for genesis file creation. -For the latest recommended method, use [op-deployer](/builders/chain-operators/tools/op-deployer). - - -The following guide shows you how to generate the L2 genesis file `genesis.json`. This is a JSON -file that represents the L2 genesis. You will provide this file to the -execution client (op-geth) to initialize your network. There is also the rollup configuration file, `rollup.json`, which will be -provided to the consensus client (op-node). - -## Solidity script - -At the time of this writing, the preferred method for genesis generation is to use the foundry script -located in the monorepo to generate an "L2 state dump" and then pass this into the op-node genesis subcommand. -The foundry script can be found at -[packages/contracts-bedrock/scripts/L2Genesis.s.sol](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/scripts/L2Genesis.s.sol). - - -When generating the genesis file, please use the same `op-contracts/vX.Y.Z` release commit used for L1 contract deployments. 
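For orientation (a heavily trimmed, illustrative sketch with a placeholder chain ID, not a real genesis), the `genesis.json` produced at the end of this process pairs the chain config with the generated state allocs; the account shown here is the WETH predeploy, and real files contain many more fields and accounts:

```json
{
  "config": { "chainId": 1337 },
  "alloc": {
    "0x4200000000000000000000000000000000000006": { "balance": "0x0" }
  }
}
```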
- - -### Configuration - -Create or modify a file `.json` inside the `deploy-config` -folder in the monorepo. The script will read the latest active fork from the -deploy config and the L2 genesis allocs generated will be compatible with this -fork. The automatically detected fork can be overwritten by setting the -environment variable `FORK` either to the lower-case fork name (currently -`delta`, `ecotone`, or `fjord`) or to `latest`, which will select the latest fork -available (currently `fjord`). - -By default, the script will dump the L2 genesis allocs (aka "state dump") of the detected or -selected fork only, to the file at `STATE_DUMP_PATH`. The optional environment -variable `OUTPUT_MODE` allows you to modify this behavior by setting it to one of -the following values: - -* `latest` (default) - only dump the selected fork's allocs. -* `all` - also dump all intermediary fork's allocs. This only works if - `STATE_DUMP_PATH` is not set. In this case, all allocs will be written to files - `/state-dump-.json`. Another path cannot currently be specified for this - use case. -* `none` - won't dump any allocs. Only makes sense for internal test usage. - -### Creation - -* `CONTRACT_ADDRESSES_PATH` represents the deployment artifact that was - generated during a contract deployment. -* `DEPLOY_CONFIG_PATH` represents a path on the filesystem that points to a - deployment config. The same deploy config JSON file should be used for L1 contracts - deployment as when generating the L2 genesis allocs. -* `STATE_DUMP_PATH` represents the filepath at which the allocs will be - written to on disk. - -```bash -CONTRACT_ADDRESSES_PATH= \ -DEPLOY_CONFIG_PATH= \ -STATE_DUMP_PATH= \ - forge script scripts/L2Genesis.s.sol:L2Genesis \ - --sig 'runWithStateDump()' -``` - -## Subcommand (op-node genesis l2) - -The genesis file creation is handled by the `genesis l2` -subcommand, provided by the `op-node`. 
The following is an example of its usage -from [v1.7.6](https://github.com/ethereum-optimism/optimism/releases/tag/v1.7.6) -- -note that you need to pass the path to the l2 genesis state dump file output by -the foundry script above: - -```bash -go run cmd/main.go genesis l2 \ - --deploy-config= \ - --l1-deployments= \ - --l2-allocs= \ - --outfile.l2= \ - --outfile.rollup= \ - --l1-rpc=> -``` - -## Next steps - -* Learn how to [initialize](/builders/node-operators/configuration/base-config#initialization-via-genesis-file) - `op-geth` with your `genesis.json` file. -* Learn how to [initialize](https://docs.metall2.com/builders/node-operators/configuration/base-config#configuring-op-node) `op-node` with your `rollup.json` file. -* Learn more about the off chain [architecture](/builders/chain-operators/architecture). diff --git a/pages/builders/chain-operators/deploy/overview.mdx b/pages/builders/chain-operators/deploy/overview.mdx deleted file mode 100644 index 1d3617644..000000000 --- a/pages/builders/chain-operators/deploy/overview.mdx +++ /dev/null @@ -1,121 +0,0 @@ ---- -title: OP Stack deployment overview -lang: en-US -description: Learn about the different components of deploying the OP Stack. ---- - -import { Callout } from 'nextra/components' - -# OP Stack deployment overview - -When deploying an OP Stack chain, you'll be setting up four different -components. It's useful to understand what each of these components does before -you start deploying your chain. The OP Stack can be deployed as a L3, which -includes additional considerations. The following information assumes you're -deploying a L2 Rollup on Ethereum. - -## Smart contracts - -The OP Stack uses a set of smart contracts on the L1 blockchain to manage -aspects of the Rollup. Each OP Stack chain has its own set of L1 smart -contracts that are deployed when the chain is created. - - - Standard OP Stack chains should only use governance approved and audited - smart contracts. 
The monorepo has them tagged with the following pattern
-  `op-contracts/vX.X.X` and you can review the release notes for details on the
-  changes. Read more about the details on our [Smart Contract Release Section](/stack/smart-contracts#official-releases).
-
-
-## Sequencer node
-
-OP Stack chains use a Sequencer node to gather proposed transactions from users
-and publish them to the L1 blockchain. OP Stack chains rely on at least one of
-these Sequencer nodes. You can also run additional replica (non-Sequencer)
-nodes.
-
-### Consensus client
-
-OP Stack Rollup nodes, like Ethereum nodes, have a consensus client. The
-consensus client is responsible for determining the list and ordering of blocks
-and transactions that are part of your blockchain. Several implementations of
-the OP Stack consensus client exist, including `op-node` (maintained by the
-Optimism Foundation), [`magi`](https://github.com/a16z/magi) (maintained by
-a16z), and [`hildr`](https://github.com/optimism-java/hildr) (maintained by optimism-java).
-
-### Execution client
-
-OP Stack nodes, like Ethereum nodes, also have an execution client. The
-execution client is responsible for executing transactions and maintaining the
-state of the blockchain. Various implementations of the OP Stack execution
-client exist, including `op-geth` (maintained by the Optimism Foundation),
-[`op-erigon`](https://github.com/testinprod-io/op-erigon)
-(maintained by Test in Prod), and [`op-nethermind`](https://docs.nethermind.io/get-started/installing-nethermind/#supported-networks).
-
-## Batcher
-
-The Batcher is a service that publishes transactions from the Rollup to the L1
-blockchain. The Batcher runs continuously alongside the Sequencer and publishes
-transactions in regular batches.
-
-## Proposer
-
-The Proposer is a service responsible for publishing transaction *results* (in
-the form of L2 state roots) to the L1 blockchain.
This allows smart contracts -on L1 to read the state of the L2, which is necessary for cross-chain -communication and reconciliation between state changes. - -## Software dependencies - -| Dependency | Version | Version Check Command | -| ------------------------------------------------------------- | -------- | --------------------- | -| [git](https://git-scm.com/) | `^2` | `git --version` | -| [go](https://go.dev/) | `^1.21` | `go version` | -| [node](https://nodejs.org/en/) | `^20` | `node --version` | -| [pnpm](https://pnpm.io/installation) | `^8` | `pnpm --version` | -| [foundry](https://github.com/foundry-rs/foundry#installation) | `^0.2.0` | `forge --version` | -| [make](https://linux.die.net/man/1/make) | `^3` | `make --version` | -| [jq](https://github.com/jqlang/jq) | `^1.6` | `jq --version` | -| [direnv](https://direnv.net) | `^2` | `direnv --version` | - -### Notes on specific dependencies - -#### `node` - -We recommend using the latest LTS version of Node.js (currently v20). -[`nvm`](https://github.com/nvm-sh/nvm) is a useful tool that can help you -manage multiple versions of Node.js on your machine. You may experience -unexpected errors on older versions of Node.js. - -#### `foundry` - -It's recommended to use the scripts in the monorepo's `package.json` for -managing `foundry` to ensure you're always working with the correct version. -This approach simplifies the installation, update, and version checking -process. - -#### `direnv` - -Parts of our tutorial use [`direnv`](https://direnv.net) as a way of loading -environment variables from `.envrc` files into your shell. This means you won't -have to manually export environment variables every time you want to use them. -`direnv` only ever has access to files that you explicitly allow it to see. - -After [installing `direnv`](https://direnv.net/docs/installation.html), you -will need to **make sure that [`direnv` is hooked into your shell](https://direnv.net/docs/hook.html)**. 
-Make sure you've followed [the guide on the `direnv` website](https://direnv.net/docs/hook.html),
-then **close your terminal and reopen it** so that the changes take effect (or
-`source` your config file if you know how to do that).
-
-
-  Make sure that you have correctly hooked `direnv` into your shell by modifying
-  your shell configuration file (like `~/.bashrc` or `~/.zshrc`). If you haven't
-  edited a config file then you probably haven't configured `direnv` properly
-  (and things might not work later).
-
-
-## Next steps
-
-* Discover how to [deploy the smart contracts](/builders/chain-operators/deploy/smart-contracts).
-* Find out how to create your [genesis file](/builders/chain-operators/deploy/genesis).
-* Explore some chain operator [best practices](/builders/chain-operators/management/best-practices).
diff --git a/pages/builders/chain-operators/deploy/smart-contracts.mdx b/pages/builders/chain-operators/deploy/smart-contracts.mdx
deleted file mode 100644
index b7d0d99ac..000000000
--- a/pages/builders/chain-operators/deploy/smart-contracts.mdx
+++ /dev/null
@@ -1,97 +0,0 @@
----
-title: OP Stack Smart Contract Deployment
-lang: en-US
-description: Learn how to deploy the OP Stack L1 smart contracts.
----
-
-import { Callout } from 'nextra/components'
-
-# OP Stack smart contract deployment
-
-
-This page is out of date and shows the legacy method for smart contract deployment.
-For the latest recommended method, use [op-deployer](/builders/chain-operators/tools/op-deployer).
-
-
-The following guide shows you how to deploy the OP Stack L1 smart contracts.
-The primary development branch is `develop`, however **you should only deploy
-official contract releases**. You can see the [smart contract overview](/stack/smart-contracts#official-releases)
-for the official release versions. Changes to the smart contracts are
-generally not considered backwards compatible.
- - - Standard OP Stack chains should use the latest governance approved and audited versions of the smart contract code. - - -## Deployment configuration - -Deploying your OP Stack contracts requires creating a deployment configuration -JSON file. You will create a new deployment configuration file in the following -monorepo subdirectory: [packages/contracts-bedrock/deploy-config](https://github.com/ethereum-optimism/optimism/tree/develop/packages/contracts-bedrock/deploy-config) -For the full set of deployment configuration options and their meanings, you -can see the [rollup deployment configuration page](/builders/chain-operators/configuration/rollup). - -## Deployment script - -The smart contracts are deployed using [foundry](https://github.com/foundry-rs) -and you can find the script's source code in the monorepo at -[packages/contracts-bedrock/scripts/deploy/Deploy.s.sol](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/scripts/deploy/Deploy.s.sol). - -### State diff - -Before deploying the contracts, you can verify the state diff by using the `runWithStateDiff()` function signature in the deployment script, which produces -the outputs inside [`snapshots/state-diff/`](https://github.com/ethereum-optimism/optimism/tree/develop/packages/contracts-bedrock/snapshots/state-diff). -Run the deployment with state diffs by executing: - -```bash -forge script -vvv scripts/deploy/Deploy.s.sol:Deploy --sig 'runWithStateDiff()' --rpc-url $ETH_RPC_URL --broadcast --private-key $PRIVATE_KEY -``` - -### Execution - -* Set the `ETHERSCAN_API_KEY` and add the `--verify` flag to verify your - contracts. -* `DEPLOYMENT_OUTFILE` will determine the filepath that the deployment - artifact is written to on disk after the deployment. It comes in the form of a - JSON file where keys are the names of the contracts and the values are the - addresses the contract was deployed to. 
-* `DEPLOY_CONFIG_PATH` is the path on the filesystem that points to a deployment - config. The same deployment config JSON file should be used for L1 contracts - deployment as when generating the L2 genesis allocs. See the [deploy-config](https://github.com/ethereum-optimism/optimism/tree/develop/packages/contracts-bedrock/deploy-config) - directory for examples and the [rollup configuration page](/builders/chain-operators/configuration/rollup) - for descriptions of the values. -* `IMPL_SALT` env var can be used to set the create2 salt for deploying the - implementation contracts. - -This will deploy an entire new system of L1 smart contracts, including a new -SuperchainConfig. In the future, there will be an easy way to deploy only -proxies and use shared implementations for each of the contracts as well as a -shared SuperchainConfig contract. - -``` -DEPLOYMENT_OUTFILE=deployments/artifact.json \ -DEPLOY_CONFIG_PATH= \ - forge script scripts/deploy/Deploy.s.sol:Deploy \ - --broadcast --private-key $PRIVATE_KEY \ - --rpc-url $ETH_RPC_URL -``` - -### Deploying a single contract - -All functions for deploying a single contract are public, meaning that -the `--sig` argument to forge script can be used to target the deployment of a -single contract. - -## Best practices - -Production users should deploy their L1 contracts from a contracts release. -All contracts releases are on git tags with the following format: -`op-contracts/vX.Y.Z`. If you're deploying a new standard chain, you should -deploy the [Multi-Chain Prep (MCP) L1 release](https://github.com/ethereum-optimism/optimism/releases/tag/op-contracts%2Fv1.3.0). -We're working on writing more documentation to prepare OP Stack chain operators -to run a fault proof chain effectively. 
- -## Next steps - -* Learn how to [create your genesis file](/builders/chain-operators/deploy/genesis) -* See all [configuration options](/builders/chain-operators/configuration/rollup) and example configurations diff --git a/pages/builders/chain-operators/features.mdx b/pages/builders/chain-operators/features.mdx deleted file mode 100644 index e2a22d516..000000000 --- a/pages/builders/chain-operators/features.mdx +++ /dev/null @@ -1,25 +0,0 @@ ---- -title: Features -lang: en-US -description: >- - Learn about features in the Optimism ecosystem. This guide provides detailed - information and resources about features. ---- - -import { Card, Cards } from 'nextra/components' - -# Features - -This section provides information on various features for chain operators. You'll find guides and overviews to help you understand and work with topics such as running an alternative data availability mode chain, implementing the bridged USDC standard on the OP Stack, running a custom gas token chain, OP Stack preinstalls, and span batches. 
- - - - - - - - - - - - diff --git a/pages/builders/chain-operators/features/_meta.json b/pages/builders/chain-operators/features/_meta.json deleted file mode 100644 index 4e76c33b5..000000000 --- a/pages/builders/chain-operators/features/_meta.json +++ /dev/null @@ -1,7 +0,0 @@ -{ - "preinstalls": "Preinstalls", - "alt-da-mode": "Run an Alt-DA Mode chain", - "custom-gas-token": "Run a custom gas token chain", - "span-batches": "Use and enable span batches on your chain", - "bridged-usdc-standard": "Bridged USDC Standard for the OP Stack" -} \ No newline at end of file diff --git a/pages/builders/chain-operators/features/bridged-usdc-standard.mdx b/pages/builders/chain-operators/features/bridged-usdc-standard.mdx deleted file mode 100644 index 9158881b2..000000000 --- a/pages/builders/chain-operators/features/bridged-usdc-standard.mdx +++ /dev/null @@ -1,40 +0,0 @@ ---- -title: Bridged USDC Standard on OP Stack -lang: en-US -description: This guide explains how chain operators can deploy USDC on their OP Stack chain. ---- - -import { Callout, Steps } from 'nextra/components' - -# Bridged USDC Standard on the OP Stack - -This explainer provides a high-level overview of the Bridged USDC Standard and how chain operators can deploy it. - -## Bridged USDC Standard - -USDC is one of the most bridged assets across the crypto ecosystem, and USDC is often bridged to new chains prior to any action from Circle. This can create a challenge when bridged USDC achieves substantial market share, but native USDC (issued by Circle) is preferred by the ecosystem, leading to fragmentation between multiple versions of USDC. Circle introduced the [Bridged USDC Standard](https://www.circle.com/en/bridged-usdc) to ensure that chain operators can easily deploy a form of bridged USDC that is capable of being upgraded in-place by Circle to native USDC, if and when appropriate, and prevent the fragmentation problem. 
- -Bridged USDC Standard for the OP Stack allows for an efficient and modular solution for expanding the Bridged USDC Standard across the Superchain ecosystem. - -Chain operators can use the Bridged USDC Standard for the OP Stack to get bridged USDC on their OP Stack chain while also providing the optionality for Circle to seamlessly upgrade bridged USDC to native USDC and retain existing supply, holders, and app integrations. - - - - Chain operators can deploy the Bridged USDC Standard for the OP Stack, providing immediate USDC availability for their users. - Importantly, the Bridged USDC Standard allows for a seamless, in-place upgrade to native USDC if an agreement is later reached between the chain operator and Circle. - - -## Security - -The referenced implementation for the OP Stack has undergone [audits from Spearbit](https://github.com/defi-wonderland/opUSDC/blob/main/audits/spearbit.pdf) and is recommended for production use. - -## Next steps - -* Ready to get started? Read the setup guide for the [Bridged USDC Standard for the OP Stack](https://github.com/defi-wonderland/opUSDC#setup). -* If you experience any problems, please reach out to [developer support](https://github.com/ethereum-optimism/developers/discussions). - -## Bridged USDC Standard Factory Disclaimer - -This software is provided "as is," without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software. 
- -Please review [Circle's disclaimer](https://github.com/circlefin/stablecoin-evm/blob/master/doc/bridged_USDC_standard.md#for-more-information) for the limitations around Circle obtaining ownership of the Bridged USDC Standard token contract. diff --git a/pages/builders/chain-operators/features/custom-gas-token.mdx b/pages/builders/chain-operators/features/custom-gas-token.mdx deleted file mode 100644 index b62405f19..000000000 --- a/pages/builders/chain-operators/features/custom-gas-token.mdx +++ /dev/null @@ -1,161 +0,0 @@ ---- -title: How to run a custom gas token chain -lang: en-US -description: Learn how to run a custom gas token chain. ---- - -import { Callout, Steps } from 'nextra/components' - -# How to Run a custom gas token chain - - -The Custom Gas Token feature is a Beta feature of the MIT licensed OP Stack. While it has received initial review from core contributors, it is still undergoing testing, and may have bugs or other issues. - - -This guide provides a walkthrough for chain operators who want to run a custom gas token chain. See the [Custom Gas Token Explainer](/stack/beta-features/custom-gas-token) for a general overview of this OP Stack feature. -An OP Stack chain that uses the custom gas token feature enables an end user to deposit an L1 native ERC20 token into L2 where that asset is minted as the native L2 asset and can be used to pay for gas on L2. - - - ### Deploying your contracts - - * Checkout the [`v2.0.0-beta.3` of the contracts](https://github.com/ethereum-optimism/optimism/tree/op-contracts/v2.0.0-beta.3) and use the commit to deploy. - - - Be sure to check out this tag or you will not deploy a chain that uses custom gas token! - - - * Update the deploy config in `contracts-bedrock/deploy-config` with new fields: `useCustomGasToken` and `customGasTokenAddress` - - * Set `useCustomGasToken` to `true`. If you set `useCustomGasToken` to `false` (it defaults this way), then it will use ETH as the gas paying token. 
- - * Set `customGasTokenAddress` to the contract address of an L1 ERC20 token you wish to use as the gas token on your L2. The ERC20 should already be deployed before trying to spin up the custom gas token chain. - - The custom gas token being set must meet the following criteria: - - * MUST be a valid ERC-20 token - * MUST have exactly 18 decimals - * MUST have a name of 32 bytes or fewer - * MUST have a symbol of 32 bytes or fewer - * MUST NOT be yield-bearing - * MUST NOT be rebasing or have a transfer fee - * MUST be transferrable only via a call to the token address itself - * MUST only allow allowances to be set via a call to the token address itself - * MUST NOT have a callback on transfer, and more generally a user MUST NOT be able to make a transfer to themselves revert - * MUST NOT allow a user to make a transfer have unexpected side effects - - - You will NOT be able to change the address of the custom gas token after it is set during deployment. - - - * The [`v2.0.0-beta.3` release](https://github.com/ethereum-optimism/optimism/tree/op-contracts/v2.0.0-beta.3) -enables fee withdrawals to L1 and L2. For more details on these values, see the [Withdrawal Network](/builders/chain-operators/configuration/rollup.mdx#withdrawal-network) -section of the docs. - - * Deploy the L1 contracts from `contracts-bedrock` using the following command: - - ```bash - DEPLOYMENT_OUTFILE=deployments/artifact.json \ - DEPLOY_CONFIG_PATH= \ - forge script scripts/deploy/Deploy.s.sol:Deploy \ - --broadcast --private-key $PRIVATE_KEY \ - --rpc-url $ETH_RPC_URL - ``` - - * `DEPLOYMENT_OUTFILE` is the path to the file to which the L1 contract deployment artifacts are written after deployment. Foundry has filesystem restrictions for security, so make this file a child of the `deployments` directory. This file will contain key/value pairs of the names and addresses of the deployed contracts.
* `DEPLOY_CONFIG_PATH` is the path to the deploy config file used for the deployment - - ### Generating L2 Allocs - - - Be sure to use the same tag that you used to deploy the L1 contracts. - - - A forge script is used to generate the L2 genesis. It is a requirement that the L1 contracts have been deployed before generating the L2 genesis, since some L1 contract addresses are embedded into the L2 genesis. - - ```bash - CONTRACT_ADDRESSES_PATH=deployments/artifact.json \ - DEPLOY_CONFIG_PATH= \ - STATE_DUMP_PATH= \ - forge script scripts/L2Genesis.s.sol:L2Genesis \ - --sig 'runWithStateDump()' --chain-id $L2_CHAIN_ID - ``` - - To generate L2 Allocs, it is assumed that: - - * You have the L1 contracts artifact, specified with `DEPLOYMENT_OUTFILE` - * `CONTRACT_ADDRESSES_PATH` is the path to the deployment artifact from deploying the L1 contracts in the first step, aka `DEPLOYMENT_OUTFILE` - * `DEPLOY_CONFIG_PATH` is the path to the deploy config file used for the deployment - * `STATE_DUMP_PATH` is the path to the generated L2 allocs file; this is the genesis state and is required for the next step. - - ### Generating L2 genesis - - The `op-node` is used to generate the final L2 genesis file. It takes the allocs created by the forge script as input for the `--l2-allocs` flag. - - ```bash - op-node genesis l2 \ - --l1-rpc $ETH_RPC_URL \ - --deploy-config \ - --l2-allocs \ - --l1-deployments \ - --outfile.l2 \ - --outfile.rollup - ``` - - ### Spinning up your infrastructure - - Ensure that the end-to-end system is running. - - ### Validating your deployment - - This calls the `WETH` predeploy, which should be considered the wrapped native asset in the context of a custom gas token. It should return the name of the token prefixed by `"Wrapped "`. - - ```bash - cast call --rpc-url $L2_ETH_RPC_URL 0x4200000000000000000000000000000000000006 'name()(string)' - ``` - - This calls the `L1Block` predeploy and should return `true`.
- - ```bash - cast call --rpc-url $L2_ETH_RPC_URL 0x4200000000000000000000000000000000000015 'isCustomGasToken()(bool)' - ``` - - This calls the L1 `SystemConfig` contract and should return `true`. - - ```bash - cast call --rpc-url $L1_ETH_RPC_URL 'isCustomGasToken()(bool)' - ``` - - ### Depositing custom gas token into the chain - - * To deposit the custom gas token into the chain, users must use the **`OptimismPortalProxy.depositERC20Transaction`** method - * Users MUST first `approve()` the `OptimismPortal` before they can deposit tokens using `depositERC20Transaction`. - - ``` - function depositERC20Transaction( - address _to, - uint256 _mint, - uint256 _value, - uint64 _gasLimit, - bool _isCreation, - bytes memory _data - ) public; - ``` - - ### Withdrawing custom gas tokens out of the chain - - * To withdraw your native custom gas token from the chain, users must use the **`L2ToL1MessagePasser.initiateWithdrawal`** method. Proving and finalizing withdrawals is identical to the process on chains that use ETH as the native gas token. - - ``` - function initiateWithdrawal( - address _target, - uint256 _gasLimit, - bytes memory _data - ) public payable; - ``` - - -## Next steps - -* Additional questions? See the FAQ section in the [Custom Gas Token Explainer](/stack/beta-features/custom-gas-token#faqs). -* For more detailed info on custom gas tokens, see the [specs](https://specs.optimism.io/experimental/custom-gas-token.html). -* If you experience any problems, please reach out to [developer support](https://github.com/ethereum-optimism/developers/discussions). diff --git a/pages/builders/chain-operators/features/preinstalls.mdx b/pages/builders/chain-operators/features/preinstalls.mdx deleted file mode 100644 index 7c21d6596..000000000 --- a/pages/builders/chain-operators/features/preinstalls.mdx +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: Preinstalls -lang: en-US -description: Learn how to use preinstalls on your chain. 
--- - -import { Callout, Steps } from 'nextra/components' - -# OP Stack preinstalls - -This guide explains OP Stack preinstalls and what they bring to developers. -To go to production on a new chain, developers need their core contracts: Gnosis Safes, the 4337 entrypoint, create2deployer, etc. On a blank EVM, these contracts can take weeks to be deployed. Now, core contracts come *preinstalled* on the OP Stack -- no third-party deployment necessary. -Whether hacking alone or starting the next big rollup, developers can start using their favorite contracts as soon as they spin up their chain. - -Preinstalls place these core smart contracts at their usual addresses in the L2 genesis state, to ensure that they're usable as soon as a chain is initialized. -With these contracts preinstalled at set addresses, developers can also expect all these contracts to be present at set addresses on the Superchain. - - - Preinstalls are automatically enabled for all new OP chains after Ecotone. - - -## Contracts and deployed addresses - -This table lists the specific contracts predeployed for new OP Chains.
- -| Contract | Deployed Address for New OP Chains | -| ----------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| [`Safe`](https://github.com/safe-global/safe-smart-account/blob/v1.3.0/contracts/GnosisSafe.sol) | referencing the [artifacts file](https://github.com/ethereum-optimism/optimism/blob/v1.1.4/op-bindings/artifacts.json): `0x69f4D1788e39c87893C980c06EdF4b7f686e2938` | -| [`SafeL2`](https://github.com/safe-global/safe-smart-account/blob/v1.3.0/contracts/GnosisSafeL2.sol) | `0xfb1bffC9d739B8D520DaF37dF666da4C687191EA` | -| [`Multicall3`](https://github.com/mds1/multicall/tree/main) | `0xcA11bde05977b3631167028862bE2a173976CA11` | -| [`MultiSend`](https://github.com/safe-global/safe-smart-account/blob/v1.3.0/contracts/libraries/MultiSend.sol) | `0x998739BFdAAdde7C933B942a68053933098f9EDa` | -| [`MultiSendCallOnly`](https://github.com/safe-global/safe-smart-account/blob/v1.3.0/contracts/libraries/MultiSendCallOnly.sol) | `0xA1dabEF33b3B82c7814B6D82A79e50F4AC44102B` | -| [create2 Proxy](https://github.com/Arachnid/deterministic-deployment-proxy) | `0x4e59b44847b379578588920cA78FbF26c0B4956C` | -| [`create2deployer`](https://github.com/pcaversaccio/create2deployer) | `0x13b0D85CcB8bf860b6b79AF3029fCA081AE9beF2` | -| [Safe Singleton Factory](https://github.com/safe-global/safe-singleton-factory/blob/main/source/deterministic-deployment-proxy.yul) | `0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7` | -| [`permit2`](https://github.com/Uniswap/permit2) | `0x000000000022D473030F116dDEE9F6B43aC78BA3` | -| [ERC-4337 Entrypoint `v0.6.0`](https://github.com/eth-infinitism/account-abstraction/tree/v0.6.0) | `0x5FF137D4b0FDCD49DcA30c7CF57E578a026d2789` (`SenderCreator` dependency @ `0x7fc98430eaedbb6070b35b39d798725049088348` on ETH mainnet) | - -## Resources and next steps - -* Still have questions? You can reach us in our [developer support forum](https://github.com/ethereum-optimism/developers/discussions). diff --git a/pages/builders/chain-operators/features/span-batches.mdx b/pages/builders/chain-operators/features/span-batches.mdx deleted file mode 100644 index 250257df5..000000000 --- a/pages/builders/chain-operators/features/span-batches.mdx +++ /dev/null @@ -1,49 +0,0 @@ ---- -title: Span Batches -lang: en-US -description: Learn how to use and enable span batches on your chain. ---- - -import { Callout, Steps } from 'nextra/components' - -# Span batches - -Span batches are an important feature that optimizes batch processing within the chain. This section provides an overview of span batches, instructions on how to enable them, and links to detailed design documents. - -## Overview - -Span batches allow for efficient processing of multiple batches in a single operation, reducing overhead and improving performance. By grouping transactions together, span batches can help optimize the throughput of the network. - -## Enabling span batches - -To enable span batches, follow these steps: - - - 1. **Configuration**: - - * Locate your chain configuration file. - * Add or update the following settings to enable span batches: - - ```yaml - span_batches: - enabled: true - max_batch_size: # Set your desired maximum batch size - batch_interval: # Set your desired batch interval in seconds - ``` - - 2. **Deploy**: - - * After updating the configuration, redeploy your chain node to apply the changes. - - 3. **Verify**: - - * Check the logs to ensure that span batches are enabled and functioning correctly. - * You should see log entries indicating that batches are being processed according to the configured settings.
- - -## Links to related pages - -For more detailed information on the design and implementation of span batches, refer to the following resources: - -* [Span Batches Specification](https://specs.optimism.io/protocol/delta/span-batches.html#span-batches) -* [Span Batch Design Docs](https://op-tip.notion.site/Span-Batch-Design-Docs-b85e599a47774dcdb8171cc84cab2476) diff --git a/pages/builders/chain-operators/hacks.mdx b/pages/builders/chain-operators/hacks.mdx deleted file mode 100644 index 5a30e8912..000000000 --- a/pages/builders/chain-operators/hacks.mdx +++ /dev/null @@ -1,27 +0,0 @@ ---- -title: Hacks -lang: en-US -description: >- - Learn about hacks in the Optimism ecosystem. This guide provides detailed - information and resources about hacks. ---- - -import { Card, Cards } from 'nextra/components' - -# Hacks - -This section provides information on various types of hacks related to OP Stack, including data availability, derivation, execution, and settlement. You'll find an overview and introduction to help you understand and work with these topics, as well as featured hacks for practical examples. 
- - - - - - - - - - - - - - diff --git a/pages/builders/chain-operators/hacks/_meta.json b/pages/builders/chain-operators/hacks/_meta.json deleted file mode 100644 index d4650c2da..000000000 --- a/pages/builders/chain-operators/hacks/_meta.json +++ /dev/null @@ -1,8 +0,0 @@ -{ - "overview": "Intro to OP Stack hacks", - "featured-hacks": "Featured hacks", - "data-availability": "Data availability hacks", - "derivation": "Derivation hacks", - "execution": "Execution hacks", - "settlement": "Settlement hacks" -} \ No newline at end of file diff --git a/pages/builders/chain-operators/hacks/data-availability.mdx b/pages/builders/chain-operators/hacks/data-availability.mdx deleted file mode 100644 index 32218c494..000000000 --- a/pages/builders/chain-operators/hacks/data-availability.mdx +++ /dev/null @@ -1,53 +0,0 @@ ---- -title: Data availability hacks -lang: en-US -description: Learn how to modify the default Data Availability Layer module for an OP Stack chain. ---- - -import { Callout } from 'nextra/components' - -# Data availability hacks - - - OP Stack Hacks are explicitly things that you can do with the OP Stack that are *not* currently intended for production use. - - OP Stack Hacks are not for the faint of heart. You will not be able to receive significant developer support for OP Stack Hacks — be prepared to get your hands dirty and to work without support. - - -## Overview - -This guide teaches you how to modify the default Data Availability Layer module for an OP Stack chain. The Data Availability Layer is responsible for the *ordering* and *storage* of the raw input data that forms the backbone of an OP Stack based chain (transactions, state roots, calls from other blockchains, etc.). You can conceptually think of this as an array of inputs — the ordering of this array should remain stable and the contents of this array should remain available. 
Unstable ordering of inputs will lead to reorgs of the OP Stack chain, while unavailable inputs will cause the OP Stack chain to halt entirely. - -## Default - -The default Data Availability Layer module for an OP Stack chain is the Ethereum DA module. When using the Ethereum DA module, all raw input data is expected to be found on Ethereum. Any data that is accessible on Ethereum can be queried when using this module, including calldata, events, and other block data. - -## Security - -OP Stack based chains are functions of the raw input data found on the Data Availability Layer module(s) used. If a required piece of data is not available, nodes will not be able to properly sync the chain. This also means that these nodes will not be able to dispute any invalid state proposals made to a Settlement Layer module. An OP Stack based chain cannot be safer than the Data Availability module. - -You should be careful to understand the security properties of any Data Availability module(s) that you use. The standard Ethereum DA module generally provides the best security guarantees at the cost of higher transaction fees. Alternative DA modules may be appropriate depending on your particular use-case and risk tolerance. - -## Modding - -### Alternative EVM DA - -A simple modification is to use an EVM-based blockchain other than Ethereum as the Data Availability Layer. Doing so simply requires using an L1 RPC other than Ethereum. - -### EVM-Ordered Alternative DA - -A more involved modification to the Data Availability Layer is an "EVM-Ordered" Alternative DA module. This involves using an EVM-based chain to maintain the *ordering* of transaction data while using a different data storage system to host the underlying data. Generally, ordering is maintained by publishing hashes of the data to the EVM-based chain while publishing the preimages to those hashes to the alternative data source. 
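The commit-and-resolve flow described above can be sketched as a toy model. Everything below is illustrative: the `altDA` map stands in for a real off-chain storage backend, and SHA-256 is used in place of whatever commitment scheme a production integration would actually choose.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"errors"
	"fmt"
)

// altDA stands in for an external data-availability service that stores
// batch preimages keyed by their commitment hash.
type altDA map[string][]byte

// commit returns the hash that would be published to the EVM ordering chain.
func commit(batch []byte) string {
	sum := sha256.Sum256(batch)
	return hex.EncodeToString(sum[:])
}

// publish stores the preimage off-chain and returns the commitment
// to post on the EVM-based ordering chain.
func publish(da altDA, batch []byte) string {
	c := commit(batch)
	da[c] = batch
	return c
}

// resolve fetches the preimage for an on-chain commitment and verifies it,
// which is the integrity check a derivation pipeline must perform.
func resolve(da altDA, c string) ([]byte, error) {
	batch, ok := da[c]
	if !ok {
		return nil, errors.New("data unavailable: preimage not found")
	}
	if commit(batch) != c {
		return nil, errors.New("preimage does not match commitment")
	}
	return batch, nil
}

func main() {
	da := altDA{}
	c := publish(da, []byte("batch-0"))
	batch, err := resolve(da, c)
	fmt.Println(string(batch), err)
}
```

Note that if `resolve` fails, the chain halts rather than producing a wrong state, which is exactly the availability risk discussed in the Security section above.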
- -An EVM-Ordered Alternative DA module significantly reduces costs by only publishing hashes and not full input data to the EVM chain. Using an EVM chain for ordering also reduces the number of changes that must be made to the standard Rollup configuration to achieve this result. - -An example of an EVM-Ordered Alternative DA module can be found within [this modification to the OP Stack](https://github.com/celestiaorg/optimism/pull/3) that uses the Celestia blockchain as a third-party data availability provider. - -### Non-EVM DA - -A non-EVM DA module uses a chain not based on the EVM to manage both the ordering and storage of raw input data. Such a modification would require relatively significant modifications to the [derivation portion](https://github.com/ethereum-optimism/optimism/tree/v1.1.4/op-node/rollup/derive) of the `op-node`. No such fully-independent DA modules have been developed yet — be the first! - -### Multiple DA - -It is possible to use multiple Data Availability Layer modules at the same time. For instance, one could source data from two EVM-based chains simultaneously in order to form a bridge between the two chains. When using multiple Data Availability Layer modules, it is imperative to establish a global ordering between the two chains. One option for establishing this ordering is to use the timestamps of blocks from each chain. - -Like a non-EVM DA module, a system with multiple Data Availability modules would need to make significant modifications to the [derivation portion](https://github.com/ethereum-optimism/optimism/tree/v1.1.4/op-node/rollup/derive) of the `op-node`. No such projects have been constructed yet. 
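Merging inputs from two DA chains by block timestamp, as suggested above, might look like the following sketch. The `Input` type and the tie-breaking rule are assumptions for illustration, not part of any OP Stack interface:

```go
package main

import (
	"fmt"
	"sort"
)

// Input is one raw input read from a Data Availability chain.
type Input struct {
	Chain     string
	Timestamp uint64
	Data      string
}

// globalOrder merges inputs from multiple DA chains into one deterministic
// sequence, ordering by block timestamp and breaking ties by chain name so
// that every node derives the same L2 chain.
func globalOrder(inputs []Input) []Input {
	out := append([]Input(nil), inputs...)
	sort.SliceStable(out, func(i, j int) bool {
		if out[i].Timestamp != out[j].Timestamp {
			return out[i].Timestamp < out[j].Timestamp
		}
		return out[i].Chain < out[j].Chain
	})
	return out
}

func main() {
	merged := globalOrder([]Input{
		{"chainB", 12, "b1"}, {"chainA", 10, "a1"}, {"chainA", 12, "a2"},
	})
	for _, in := range merged {
		fmt.Println(in.Chain, in.Data)
	}
}
```

The tie-break matters: without a deterministic rule for equal timestamps, two nodes could derive different orderings and fork.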
diff --git a/pages/builders/chain-operators/hacks/derivation.mdx b/pages/builders/chain-operators/hacks/derivation.mdx deleted file mode 100644 index 47651c308..000000000 --- a/pages/builders/chain-operators/hacks/derivation.mdx +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: Derivation hacks -lang: en-US -description: Learn how to modify the default Derivation layer module for an OP Stack chain. ---- - -import { Callout } from 'nextra/components' - -# Derivation hacks - - - OP Stack Hacks are explicitly things that you can do with the OP Stack that are *not* currently intended for production use. - - OP Stack Hacks are not for the faint of heart. You will not be able to receive significant developer support for OP Stack Hacks — be prepared to get your hands dirty and to work without support. - - -## Overview - -This guide teaches you how to modify the default Derivation layer module for an OP Stack chain. The Derivation layer is responsible for parsing the raw inputs from the Data Availability layer and converting them into [Engine API](https://github.com/ethereum/execution-apis/tree/main/src/engine) payloads to be sent to the Execution layer. The Derivation Layer is generally tightly coupled to the Data Availability layer because it must understand both the APIs for the Data Availability layer module(s) of choice and the format of the raw data published to the chosen module(s). - -## Default - -The default Derivation layer module is the Rollup module. This module derives transactions from three sources: Sequencer transactions, user deposits, and L1 blocks. The Rollup module also enforces certain ordering properties that, for example, guarantee that user deposits are always included in the L2 chain within a certain configurable amount of time. - -## Security - -Modifying the Derivation layer can have unintended consequences. For example, removing or extending the time window in which user deposits must be included can allow a Sequencer to censor the L2 chain. 
Because of the flexibility of the Derivation layer, the exact impact of any change is likely to be unique to the specifics of the change. The negative impacts of any modifications should be carefully considered on a case-by-case basis. - -## Modding - -### EVM event-triggered transactions - -The default Rollup configuration of the OP Stack includes "deposited" transactions that are triggered whenever a specific event is emitted by the `OptimismPortal` contract on L1. Using the same principle, an OP Stack chain can derive transactions from events emitted by *any* contract on an EVM-based DA. Refer to [attributes.go](https://github.com/ethereum-optimism/optimism/blob/e468b66efedc5f47f4e04dc1acc803d4db2ce383/op-node/rollup/derive/attributes.go#L70) to understand how deposited transactions are derived and how custom transactions can be created. - -### EVM block-triggered transactions - -Like with events, transactions on an OP Stack chain can be triggered whenever a new block is published on an EVM-based DA. The default Rollup configuration of the OP Stack already includes a block-triggered transaction in the form of [the "L1 info" transaction](https://github.com/ethereum-optimism/optimism/blob/e468b66efedc5f47f4e04dc1acc803d4db2ce383/op-node/rollup/derive/attributes.go#L103) that relays information like the latest block hash, timestamp, and base fee into L2. The Getting Started guide demonstrates the addition of a new block-triggered transaction that reports the amount of gas burned via the base fee on L1. - -### And much, much more… - -The Derivation layer is one of the most flexible layers of the stack. Transactions can be generated from all sorts of raw input data and can be triggered from all sorts of conditions. You can derive transactions from any piece of data that can be found in the Data Availability layer modules!
- -[Tutorial: Adding attributes to the derivation function](/builders/chain-operators/tutorials/adding-derivation-attributes). diff --git a/pages/builders/chain-operators/hacks/execution.mdx b/pages/builders/chain-operators/hacks/execution.mdx deleted file mode 100644 index 495ceb8a1..000000000 --- a/pages/builders/chain-operators/hacks/execution.mdx +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: Execution hacks -lang: en-US -description: Learn how to modify the default Execution Layer module for an OP Stack chain. ---- - -import { Callout } from 'nextra/components' - -# Execution hacks - - - OP Stack Hacks are explicitly things that you can do with the OP Stack that are *not* currently intended for production use. - - OP Stack Hacks are not for the faint of heart. You will not be able to receive significant developer support for OP Stack Hacks — be prepared to get your hands dirty and to work without support. - - -## Overview - -This guide teaches you how to modify the default Execution Layer module for an OP Stack chain. The Execution Layer is responsible for defining the format of the state and the state transition function on L2. It is expected to trigger the state transition function when it receives a payload via the [Engine API](https://github.com/ethereum/execution-apis/tree/main/src/engine). Although the default Execution Layer module is the EVM, you can replace the EVM with any alternative VM as long as it sits behind the Engine API. - -## Default - -The default Execution Layer module is the Rollup EVM module. The Rollup EVM module utilizes a very lightly modified EVM that adds support for transactions that are triggered by smart contracts on L1 and introduces an L1 data fee to each transaction that accounts for the cost of publishing user transactions to L1. You can find the full set of differences between the standard EVM and the Rollup EVM [on this page](https://op-geth.optimism.io/). 
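For intuition, the L1 data fee added by the Rollup EVM can be approximated with the classic pre-Ecotone formula, `(calldataGas + overhead) * l1BaseFee * scalar / 1e6`. The constants used below are hypothetical example values, and the exact formula varies by protocol version, so treat this as a sketch rather than the live fee logic:

```go
package main

import (
	"fmt"
	"math/big"
)

// l1DataFee sketches a simplified L1 data fee computation:
// calldata gas is charged at 4 gas per zero byte and 16 per non-zero byte,
// then scaled by the L1 base fee and a chain-configured scalar (in
// millionths). The overhead and scalar values are illustrative only.
func l1DataFee(data []byte, overhead, l1BaseFee, scalar uint64) *big.Int {
	var calldataGas uint64
	for _, b := range data {
		if b == 0 {
			calldataGas += 4
		} else {
			calldataGas += 16
		}
	}
	fee := new(big.Int).SetUint64(calldataGas + overhead)
	fee.Mul(fee, new(big.Int).SetUint64(l1BaseFee))
	fee.Mul(fee, new(big.Int).SetUint64(scalar))
	return fee.Div(fee, big.NewInt(1_000_000))
}

func main() {
	tx := []byte{0x00, 0x01, 0x02, 0x00} // toy serialized transaction
	fmt.Println(l1DataFee(tx, 188, 20_000_000_000, 684_000))
}
```

A user pays this on top of the ordinary execution fee, which is why compressing calldata directly lowers costs on an OP Stack chain.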
- -## Security - -As with modifications to the Derivation Layer, modifications to the Execution Layer can have unintended consequences. For instance, modifications to the EVM may break existing tooling or may open the door to denial of service attacks. Consider the impact of each modification carefully on a case-by-case basis. - -## Modding - -### EVM tweaks - -The default Execution Layer module is the EVM. It's possible to modify the EVM in many different ways like adding new precompiles or inserting predeployed smart contracts into the genesis state. Precompiles can help make common smart contract operations cheaper and can therefore further reduce the cost of execution for your specific use-case. These modifications should be made directly to [the execution client](https://github.com/ethereum-optimism/op-geth). - -It's also possible to create alternative execution client implementations to improve the security properties of your chain. Note that if you modify the EVM, you must apply the same modifications to every execution client that you would like to support. - -### Alternative VMs - -The OP Stack allows you to replace the EVM with *any* state transition function, as long as the transition can be triggered via the Engine API. This has, for example, been used to implement an OP Stack chain that runs a GameBoy emulator rather than the EVM. - -[Tutorial: Adding a precompile](/builders/chain-operators/tutorials/adding-precompiles). diff --git a/pages/builders/chain-operators/hacks/featured-hacks.mdx b/pages/builders/chain-operators/hacks/featured-hacks.mdx deleted file mode 100644 index f844e08eb..000000000 --- a/pages/builders/chain-operators/hacks/featured-hacks.mdx +++ /dev/null @@ -1,65 +0,0 @@ ---- -title: Featured hacks -lang: en-US -description: Learn about some of the customizations stack developers have made for an OP Stack chain. 
---- - -import { Callout } from 'nextra/components' - -# Featured hacks - - - OP Stack Hacks are explicitly things that you can do with the OP Stack that are *not* currently intended for production use. - - OP Stack Hacks are not for the faint of heart. You will not be able to receive significant developer support for OP Stack Hacks — be prepared to get your hands dirty and to work without support. - - -## Overview - -Featured Hacks is a compilation of some of the cool stuff people are building on top of the OP Stack! - -## OPCraft - -### Author - -[Lattice](https://lattice.xyz/) - -### Description - -OPCraft was an OP Stack chain that ran a modified EVM as the backend for a fully onchain 3D voxel game built with [MUD](https://mud.dev/). - -### OP Stack configuration - -* Data Availability: Ethereum DA (Goerli) -* Sequencer: Single Sequencer -* Derivation: Standard Rollup -* Execution: Modified Rollup EVM - -### Links - -* [Announcing OPCraft: an Autonomous World built on the OP Stack](https://web.archive.org/web/20231004175307/https://blog.oplabs.co/opcraft-autonomous-world//) -* [OPCraft Explorer](https://opcraft.mud.dev/) -* [OPCraft on GitHub](https://github.com/latticexyz/opcraft) -* [MUD](https://mud.dev/) - -## Ticking Optimism - -### Author - -[@therealbytes](https://twitter.com/therealbytes) - -### Description - -Ticking Optimism is a proof-of-concept implementation of an OP Stack chain that calls a `tick` function every block. By using the OP Stack, Ticking Optimism avoids the need for off-chain infrastructure to execute a function on a regular basis. Ticking Conway is a system that uses Ticking Optimism to build [Conway's Game of Life](https://conwaylife.com/) onchain. 
- -### OP Stack configuration - -* Data Availability: Ethereum DA (any) -* Sequencer: Single Sequencer -* Derivation: Standard Rollup with custom `tick` function -* Execution: Rollup EVM - -### Links - -* [Ticking Optimism on GitHub](https://github.com/therealbytes/ticking-optimism) -* [Ticking Conway on GitHub](https://github.com/therealbytes/ticking-conway) diff --git a/pages/builders/chain-operators/hacks/overview.mdx b/pages/builders/chain-operators/hacks/overview.mdx deleted file mode 100644 index c4a0f31c0..000000000 --- a/pages/builders/chain-operators/hacks/overview.mdx +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: Introduction to OP Stack hacks -lang: en-US -description: Learn general information on how to experiment and customize an OP Stack chain. ---- - -import { Callout } from 'nextra/components' - -# Introduction to OP Stack hacks - -Welcome to OP Stack Hacks, the **highly experimental** region of the OP Stack docs. OP Stack Hacks are an unofficial guide for messing around with the OP Stack. Here you'll find information about ways that the OP Stack can be modified in interesting ways. - -OP Stack Hacks create blockchains that aren't exactly OP Stack, and may be insecure. Hacked OP Stack chains can break key invariants that are required to interoperate with [the Optimism Superchain](/superchain/superchain-explainer). **Developers of chains that wish to interoperate with [the Optimism Superchain](/superchain/superchain-explainer) should *not* include any hacks**. When in doubt, stick with the official components within [the current release of the OP Stack](/stack/getting-started#the-op-stack-today). - - - OP Stack Hacks are explicitly things that you can do with the OP Stack that are *not* currently intended for production use. - - OP Stack Hacks are not for the faint of heart. You will not be able to receive significant developer support for OP Stack Hacks — be prepared to get your hands dirty and to work without support. 
- - -## OP Stack hack guides - -We have curated a list of guides to walk you through different OP stack modules that you can customize as a developer. - -* [Data Availability Hacks](data-availability) -* [Derivation Hacks](derivation) -* [Execution Hacks](execution) -* [Settlement Hacks](settlement) -* [Featured Hacks](featured-hacks) - -## OP Stack hack tutorials - -We also have a handful of tutorials offering step-by-step instructions on how to make customizations to an OP Stack chain. **As a reminder, you will not be able to receive significant developer support for OP Stack Hacks — be prepared to get your hands dirty and to work without support.** - -* [Adding Attributes to the Derivation Function](../tutorials/adding-derivation-attributes) -* [Adding A Precompile](../tutorials/adding-precompiles) -* [Modifying Predeployed Contracts](../tutorials/modifying-predeploys) - -## Conclusion - -Have an OP Stack hack you'd like to share? Want to write an OP stack hack tutorial? Submit your request in our [docs repo](https://github.com/ethereum-optimism/docs/issues/new?assignees=\&labels=tutorial%2Cdocumentation%2Ccommunity-request\&projects=\&template=suggest_tutorial.yaml\&title=%5BTUTORIAL%5D+Add+PR+title). diff --git a/pages/builders/chain-operators/hacks/settlement.mdx b/pages/builders/chain-operators/hacks/settlement.mdx deleted file mode 100644 index 2b96be231..000000000 --- a/pages/builders/chain-operators/hacks/settlement.mdx +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: Settlement hacks -lang: en-US -description: Learn how to modify the default Settlement Layer module for an OP Stack chain. ---- - -import { Callout } from 'nextra/components' - -# Settlement hacks - - - OP Stack Hacks are explicitly things that you can do with the OP Stack that are *not* currently intended for production use. - - OP Stack Hacks are not for the faint of heart. 
You will not be able to receive significant developer support for OP Stack Hacks — be prepared to get your hands dirty and to work without support. - - -## Overview - -This guide teaches you how to modify the default Settlement Layer module for an OP Stack chain. The Settlement Layer includes modules that are used by third-party chains to establish a *view* of the state of your OP Stack chain. This view can then be used by applications on those chains to make decisions based on the state of your OP Stack chain. Third-party chains can be any other blockchain, including other OP Stack chains. One common Settlement Layer mechanism is a withdrawal system that allows users to send state from your OP Stack chain to the third-party chain. Modifications to this layer typically involve introducing new modules or tweaking the security model of existing modules. - -## Default - -The default Settlement Layer module is currently the Attestation Proof Optimistic Settlement module. This module allows a third-party chain to become aware of the state of an OP Stack chain through an Optimistic protocol in which a proposal can be challenged by a threshold of attestations, from a pre-defined set of addresses, supporting a state that differs from the proposed state. Once a Cannon fault proof is shipped to production, this default module can be replaced with a module that allows anyone to challenge proposals by playing the Cannon dispute game. - -## Security - -Modifications to the Settlement Layer can strongly impact the security of common mechanisms like user withdrawals. A decreased withdrawal delay can, for instance, open the door to gas spam attacks that make challenges exceedingly expensive. It is generally not recommended to modify the Settlement Layer unless you know what you're doing. - -## Modding - -### Tweaked parameters - -One simple modification to the Settlement Layer is to tweak the parameters of the default Optimistic state withdrawal mechanism.
For example, the withdrawal period can be reduced if a smaller withdrawal period would be sufficient to secure your system. - -### Custom proofs - -Settlement Layer modules use a proof system to verify the correctness of the state of your OP Stack chain as proposed on the third-party chain. In general, these proofs are either Optimistic proofs that require a withdrawal delay or Validity proofs that use a mathematical proof system to assert the validity of the proposal. The current Attestation Proof Optimistic Settlement module could be replaced with a Fault Proof System. - -### Multiple modules - -There is no requirement that a system only have one Settlement Layer module. It is possible to use one or more Settlement Layer modules on one or more third-party chains. A system that aims to bridge state between two chains will likely need to use one Data Availability Layer module and one Settlement Layer module per chain. diff --git a/pages/builders/chain-operators/management.mdx b/pages/builders/chain-operators/management.mdx deleted file mode 100644 index df32978e5..000000000 --- a/pages/builders/chain-operators/management.mdx +++ /dev/null @@ -1,27 +0,0 @@ ---- -title: Management -lang: en-US -description: >- - Learn about management in the Optimism ecosystem. This guide provides detailed - information and resources about management. ---- - -import { Card, Cards } from 'nextra/components' - -# Management - -This section provides information on chain operator best practices, using blobs, managing keys, rollup operations, using snap sync for chain operators, and troubleshooting chain operations. You'll find guides and tutorials to help you understand and work with these topics. 
- - - - - - - - - - - - - - diff --git a/pages/builders/chain-operators/management/_meta.json b/pages/builders/chain-operators/management/_meta.json deleted file mode 100644 index a33e1fbfb..000000000 --- a/pages/builders/chain-operators/management/_meta.json +++ /dev/null @@ -1,8 +0,0 @@ -{ - "blobs": "Using blobs", - "snap-sync": "Using Snap Sync", - "operations": "Node operations", - "key-management": "Key management", - "troubleshooting": "Troubleshooting", - "best-practices": "Best practices" -} diff --git a/pages/builders/chain-operators/management/best-practices.mdx b/pages/builders/chain-operators/management/best-practices.mdx deleted file mode 100644 index d5bcd8ae6..000000000 --- a/pages/builders/chain-operators/management/best-practices.mdx +++ /dev/null @@ -1,95 +0,0 @@ ---- -title: Chain operator best practices -lang: en-US -description: Learn some best practices for managing the OP Stack's off-chain components. ---- - -import { Callout } from 'nextra/components' - -# Chain operator best practices - -The following information has some best practices around running the OP Stack's -off-chain components. - -## Correct release versions - -Chain and node operators should always run the latest production releases of -the OP Stack's off chain components. Our latest releases, notes, and changelogs -can be found on GitHub. `op-node` releases can be found [here](https://github.com/ethereum-optimism/optimism/releases) -and `op-geth` releases can be found [here](https://github.com/ethereum-optimism/op-geth/releases). - -* Production releases are always tags, versioned as - `/v`. For example, an `op-node` release might be - versioned as `op-node/v1.7.5`. -* Tags of the form `v`, such as `v1.7.7`, indicate releases of all - Go code only, and **DO NOT** include smart contracts. -* In the monorepo, this means these `v` releases contain all `op-*` - components, and exclude all `contracts-*` components. 
-* `op-geth` embeds upstream geth's version inside its own version as follows: - `vMAJOR.GETH_MAJOR GETH_MINOR GETH_PATCH.PATCH`. Basically, geth's version is - our minor version. For example, if geth is at `v1.12.0`, the corresponding - `op-geth` version would be `v1.101200.0`. Note that we pad out to three - characters for the geth minor version and two characters for the geth patch - version. Since we cannot left-pad with zeroes, the geth major version is not - padded. - -## Keep deployment artifacts - -After deploying your contracts on Ethereum, you should keep a record of all the -deployment artifacts: - -* Contract release tag and commit hash -* Contract deployment configuration file. This is the JSON file you created -and passed to the deployment script when you deployed the contracts. -* Contract deployment directory with smart contract artifacts. This is -created in [packages/contracts-bedrock/deployments](https://github.com/ethereum-optimism/optimism/tree/develop/packages/contracts-bedrock/deployments) -* The rollup configuration file that you generated after the contract -deployment -* The genesis file that you generated after the contract deployment - -## Incremental upgrade rollouts - -When upgrading your nodes, take a staggered approach. This means deploying the -upgrade gradually across your infrastructure and ensuring things work as -expected before making changes to every node. - -## Isolate your sequencer - -You can isolate your sequencer node, by not connecting it directly to the -internet. Instead, you could handle your ingress traffic behind a proxy. Have -the proxy forward traffic to replicas and have them gossip the transactions -internally. - -## Improve reliability of peer-to-peer transactions - -These flags can improve the reliability of peer-to-peer transactions from internal replica nodes and the sequencer node. 
- -For sequencer nodes: - -``` -GETH_TXPOOL_JOURNAL: "" -GETH_TXPOOL_JOURNALREMOTES: "false" -GETH_TXPOOL_NOLOCALS: "true" -``` - -For replica nodes: - -``` -GETH_TXPOOL_JOURNALREMOTES: "true" -GETH_TXPOOL_LIFETIME: "1h" -GETH_TXPOOL_NOLOCALS: "true" -``` - -For additional information about these flags, check out our [Execution Layer Configuration Options](/builders/node-operators/configuration/execution-config) doc. - -## Write your own runbooks - -Create custom runbooks to prepare for operating an OP Stack chain. For a deeper understanding of daily operations and best practices, explore the public [OP Mainnet Runbooks](https://oplabs.notion.site/OP-Mainnet-Runbooks-120f153ee1628045b230d5cd3df79f63) to see how these practices could be applied to your own chain. - -## Assumptions - -### op-proposer assumes archive mode - -The `op-proposer` currently assumes that `op-geth` is being run in archive -mode. This will likely be updated in a future network upgrade, but it is -necessary for L2 withdrawals at the moment. diff --git a/pages/builders/chain-operators/management/blobs.mdx b/pages/builders/chain-operators/management/blobs.mdx deleted file mode 100644 index 605e1b296..000000000 --- a/pages/builders/chain-operators/management/blobs.mdx +++ /dev/null @@ -1,137 +0,0 @@ ---- -title: Using blobs -lang: en-US -description: Learn how to switch to using blobs for your chain. ---- - -import { Callout, Steps } from 'nextra/components' -import { Tabs } from 'nextra/components' - -# Using Blobs - -This guide walks you through how to switch to using blobs for your chain after Ecotone is activated. - - - This guide is intended for chains already upgraded to Ecotone. - - -## Switch to using blobs - - - ### Determine scalar values for using blobs - - The first step to switching to submit data with Blobs is to calculate the - scalar values you wish to set for the formula to charge users fees. 
To determine the scalar values to use for your chain, you can utilize this [fee parameter calculator](https://docs.google.com/spreadsheets/d/1V3CWpeUzXv5Iopw8lBSS8tWoSzyR4PDDwV9cu2kKOrs/edit) to get a better estimate for scalar values on your chain. Input the average transactions per day your chain is processing, the types of transactions that occur on your chain, the [`OP_BATCHER_MAX_CHANNEL_DURATION`](/builders/chain-operators/configuration/batcher#setting-your--op_batcher_max_channel_duration) you have parameterized on your `op-batcher`, and the target margin you wish to charge users on top of your L1 costs. The following information is tuned to a network like OP Mainnet. For more details on fee scalars, see [Transaction Fees, Ecotone section](/stack/transactions/fees#ecotone).

  #### Adjust fees to change margins

  As a chain operator, you may want to scale your scalar values up or down, either because the throughput of your chain has changed and you are filling significantly more or less of each blob, or because you simply wish to increase your margin to cover operational expenses. To increase or decrease your margin on L1 data costs, scale both the `l1baseFeeScalar` and the `l1blobBaseFeeScalar` by the same multiple.

  For example, if you wished to increase your margin on L1 data costs by \~10%, you would do:

  ```
  newBaseFeeScalar = prevBaseFeeScalar * 1.1
  newBlobBaseFeeScalar = prevBlobBaseFeeScalar * 1.1
  ```

  ### Update your scalar values for blobs

  Once you have determined your ideal `BaseFeeScalar` and `BlobBaseFeeScalar`, you will need to apply those values for your chain.
The first step is to encode both values into a single value to be set in your L1 config.

  You can set your scalar values by sending a transaction to the L1 `SystemConfigProxy.setGasConfigEcotone` function:

  ```bash
  cast send \
  --private-key $GS_ADMIN_PRIVATE_KEY \
  --rpc-url $ETH_RPC_URL \
  \
  "setGasConfigEcotone(uint32,uint32)" \
  
  ```

  Check that the gas price oracle on L2 returns the expected values for `baseFeeScalar` and `blobBaseFeeScalar` (wait \~1 minute). This is checked on L2, so ensure you are using an RPC URL for your chain. You'll also need to provide a `gas-price` to geth when making this call.

  ```shell
  cast call 0x420000000000000000000000000000000000000F 'baseFeeScalar()(uint256)' --rpc-url YOUR_L2_RPC_URL
  ```

  ```shell
  cast call 0x420000000000000000000000000000000000000F 'blobBaseFeeScalar()(uint256)' --rpc-url YOUR_L2_RPC_URL
  ```

  ### Update your batcher to post blobs

  Now that the fee config has been updated, you should immediately configure your batcher. Your chain may be undercharging users during the time between updating the scalar values and updating the batcher, so aim to do this immediately after.

  Steps to configure the batcher:

  * Configure `OP_BATCHER_DATA_AVAILABILITY_TYPE=blobs`. The batcher will have to be restarted for it to take effect.
  * Ensure your `OP_BATCHER_MAX_CHANNEL_DURATION` is properly set to maximize your fee savings. See [OP Batcher Max Channel Configuration](/builders/chain-operators/configuration/batcher#set-your--op_batcher_max_channel_duration) for more details.
  * Optionally, you can configure your batcher to support multi-blobs. See [Multi-Blob Batcher Configuration](/builders/chain-operators/configuration/batcher#configure-your-multi-blob-batcher) for more details.
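If you want to sanity-check the packed value offline, the encoding can be reproduced in a few lines. The `SystemConfig` stores the two scalars packed into a single 32-byte value; this is a sketch assuming the Ecotone scalar layout from the OP Stack specs (version byte `0x01`, 23 zero bytes, then the two big-endian `uint32` scalars). The helper name and sample values below are ours, not part of any official tooling:

```python
def encode_ecotone_scalars(base_fee_scalar: int, blob_base_fee_scalar: int) -> str:
    """Pack the two uint32 scalars into the 32-byte Ecotone scalar value.

    Assumed layout (OP Stack specs): 1 version byte (0x01), 23 zero bytes,
    blobBaseFeeScalar as uint32 big-endian, baseFeeScalar as uint32 big-endian.
    """
    payload = (
        bytes([0x01])                            # version byte for the Ecotone format
        + bytes(23)                              # zero padding
        + blob_base_fee_scalar.to_bytes(4, "big")
        + base_fee_scalar.to_bytes(4, "big")
    )
    return "0x" + payload.hex()

# Illustrative values only -- use the scalars you chose for your own chain:
print(encode_ecotone_scalars(1368, 810949))
# -> 0x010000000000000000000000000000000000000000000000000c5fc500000558
```

Comparing this against the `scalar()` value stored on your L1 `SystemConfig` is a quick way to confirm the two scalars were applied as intended.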
- - -## Switch back to using calldata - -As a chain operator, if the `blobBaseFee` is expensive enough and your chain is -not processing enough transactions to meaningfully fill blobs within your -configured batcher `OP_BATCHER_MAX_CHANNEL_DURATION`, you may wish to switch -back to posting data to calldata. Utilize the [fee parameter calculator](https://docs.google.com/spreadsheets/d/12VIiXHaVECG2RUunDSVJpn67IQp9NHFJqUsma2PndpE/edit) to inform whether your transactions will be cheaper if submitting blobs or if submitting calldata. Chains can follow these steps to switch from -blobs back to using calldata. - - - ### Determine your scalar values for using calldata - - If you are using calldata, then you can set your `BaseFeeScalar` similarly to - how you would have set "scalar" prior to Ecotone, though with a 5-10% bump to - compensate for the removal of the "overhead" component. - You can utilize this [fee parameter calculator](https://docs.google.com/spreadsheets/d/12VIiXHaVECG2RUunDSVJpn67IQp9NHFJqUsma2PndpE/edit) - to get a better estimate for scalar values on your chain. The following - information is tuned to a network like OP Mainnet. - - Chains can update their fees to increase or decrease their margin. If using calldata, then `BaseFeeScalar` should be scaled to achieve the desired margin. - For example, to increase your L1 Fee margin by 10%: - - ``` - BaseFeeScalar = BaseFeeScalar * 1.1 - BlobBaseFeeScalar = 0 - ``` - - ### Update your scalar values for using calldata - - To set your scalar values, follow the same process as laid out in [Update your Scalar values for Blobs](#update-your-scalar-values-for-blobs). - - ### Update your batcher to post calldata - - Now that the fee config has been updated, you will want to immediately configure your batcher. - - - Reminder, that your chain may be undercharging users during the time between updating the scalar values and updating the Batcher, so aim to do this immediately after. 
- - - * Configure `OP_BATCHER_DATA_AVAILABILITY_TYPE=calldata`. The batcher will have to be restarted for it to take effect. - * Ensure your `OP_BATCHER_MAX_CHANNEL_DURATION` is properly set to maximize savings. **NOTE:** While setting a high value here will lower costs, it will be less meaningful than for low throughput chains using blobs. See [OP Batcher Max Channel Configuration](/builders/chain-operators/configuration/batcher#set-your--op_batcher_max_channel_duration) for more details. - - -## Other considerations - -* For information on L1 Data Fee changes related to the Ecotone upgrade, visit the [Transaction Fees page](/stack/transactions/fees#ecotone). -* If you want to enable archive nodes, you will need to access a blob archiver service. You can use [Optimism's](/builders/node-operators/management/snapshots#op-mainnet-archive-node) or [run your own](/builders/chain-operators/tools/explorer#create-an-archive-node). diff --git a/pages/builders/chain-operators/management/key-management.mdx b/pages/builders/chain-operators/management/key-management.mdx deleted file mode 100644 index 0fb1a031d..000000000 --- a/pages/builders/chain-operators/management/key-management.mdx +++ /dev/null @@ -1,44 +0,0 @@ ---- -title: Key management -lang: en-US -description: A guide for chain operators on managing private keys on their chain, covering hot and cold wallets, and the use of an HSM. ---- - -import { Callout } from 'nextra/components' - -# Managing your keys - -This guide informs chain operators on important key management considerations. -There are certain [privileged roles](/chain/security/privileged-roles) that -need careful consideration. The privileged roles are categorized as hot wallets -or cold wallets. - -## Hot wallets - -The addresses for the `Batcher` and the `Proposer` need to have their private -keys online somewhere for a component of the system to work. If these addresses -are compromised, the system can be exploited. 
It is up to the chain operator to make the decision on how they want to manage these keys. One suggestion is to use a Hardware Security Module (HSM) to provide a safer environment for key management. Cloud providers oftentimes provide Key Management Systems (KMS) that can work with your developer operations configurations. This can be used in conjunction with the `eth_signTransaction` RPC method.

You can take a look at the signer client [source code](https://github.com/ethereum-optimism/optimism/blob/develop/op-service/signer/client.go) if you're interested in what's happening under the hood.

## Cold wallets

The addresses for the cold wallets cannot be used without human intervention. These can be set up as multisig contracts, so they can be controlled by groups of community members and avoid a single point of failure. The signers behind a multisig should probably also use a hardware wallet.

Refer to the [privileged roles](/chain/security/privileged-roles) documentation for more information about these different addresses and their security concerns.

diff --git a/pages/builders/chain-operators/management/operations.mdx b/pages/builders/chain-operators/management/operations.mdx deleted file mode 100644 index b85102f96..000000000 --- a/pages/builders/chain-operators/management/operations.mdx +++ /dev/null @@ -1,168 +0,0 @@

---
title: Rollup operations
lang: en-US
description: Learn basics of rollup operations, such as how to start and stop your rollup, get your rollup config, and how to add nodes.
---

import { Callout, Steps } from 'nextra/components'

# Rollup operations

This guide reviews the basics of rollup operations, such as how to start your rollup, stop your rollup, get your rollup config, and add nodes.
- -## Stopping your rollup - -An orderly shutdown is done in the reverse order to the order in which components were started: - -### To stop the batcher, use this command: - - ```sh - curl -d '{"id":0,"jsonrpc":"2.0","method":"admin_stopBatcher","params":[]}' \ - -H "Content-Type: application/json" http://localhost:8548 | jq - ``` - - This way the batcher knows to save any data it has cached to L1. - Wait until you see `Batch Submitter stopped` in batcher's output before you stop the process. - -### Stop `op-node` - This component is stateless, so you can just stop the process. - -### Stop `op-geth` - Make sure you use **CTRL-C** to avoid database corruption. If Geth stops unexpectedly the database can be corrupted. This is known as an "[unclean shutdown](https://geth.ethereum.org/docs/fundamentals/databases#unclean-shutdowns)" and it can lead to a variety of problems for the node when it is restarted. - - -## Starting your rollup - -To restart the blockchain, use the same order of components you did when you initialized it. - -### Start `op-geth` -### Start `op-node` -### Start `op-batcher` - - If `op-batcher` is still running and you just stopped it using RPC, you can start it with this command: - - ```sh - curl -d '{"id":0,"jsonrpc":"2.0","method":"admin_startBatcher","params":[]}' \ - -H "Content-Type: application/json" http://localhost:8548 | jq - ``` - - -Synchronization takes time - -`op-batcher` might have warning messages similar to: - -``` -WARN [03-21|14:13:55.248] Error calculating L2 block range err="failed to get sync status: Post \"http://localhost:8547\": context deadline exceeded" -WARN [03-21|14:13:57.328] Error calculating L2 block range err="failed to get sync status: Post \"http://localhost:8547\": context deadline exceeded" -``` - -This means that `op-node` is not yet synchronized up to the present time. -Just wait until it is. - - - -## Getting your rollup config - -Use this tool to get your rollup config from `op-node`. 
This will only work if your chain is **already** in the [superchain-registry](https://github.com/ethereum-optimism/superchain-registry/blob/main/chainList.json) and `op-node` has been updated to pull those changes in from the registry. - - -This script will NOT work for chain operators trying to generate this data in order to submit it to the registry. - - - -### Get your rollup config from `op-node` - -You'll need to run this tool: - -``` -./bin/op-node networks dump-rollup-config --network=sepolia -{ - "genesis": { - "l1": { - "hash": "0x48f520cf4ddaf34c8336e6e490632ea3cf1e5e93b0b2bc6e917557e31845371b", - "number": 4071408 - }, - "l2": { - "hash": "0x102de6ffb001480cc9b8b548fd05c34cd4f46ae4aa91759393db90ea0409887d", - "number": 0 - }, - "l2_time": 1691802540, - "system_config": { - "batcherAddr": "0x8f23bb38f531600e5d8fddaaec41f13fab46e98c", - "overhead": "0x00000000000000000000000000000000000000000000000000000000000000bc", - "scalar": "0x00000000000000000000000000000000000000000000000000000000000a6fe0", - "gasLimit": 30000000 - } - }, - "block_time": 2, - "max_sequencer_drift": 600, - "seq_window_size": 3600, - "channel_timeout": 300, - "l1_chain_id": 11155111, - "l2_chain_id": 11155420, - "regolith_time": 0, - "canyon_time": 1699981200, - "delta_time": 1703203200, - "ecotone_time": 1708534800, - "batch_inbox_address": "0xff00000000000000000000000000000011155420", - "deposit_contract_address": "0x16fc5058f25648194471939df75cf27a2fdc48bc", - "l1_system_config_address": "0x034edd2a225f7f429a63e0f1d2084b9e0a93b538", - "protocol_versions_address": "0x79add5713b383daa0a138d3c4780c7a1804a8090", - "da_challenge_address": "0x0000000000000000000000000000000000000000", - "da_challenge_window": 0, - "da_resolve_window": 0, - "use_plasma": false -} -``` -### Check the flags -Ensure that you are using the appropriate flag. -The `--network=sepolia` flag allows the tool to pick up the appropriate data from the registry, and uses the OPChains mapping under the hood. 
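Once you have dumped the config, a quick scripted sanity check can catch copy/paste mistakes before you distribute the file to node operators. This is a minimal sketch, not an official tool: the field names come from the JSON output above, and the check that the L2 genesis starts at block 0 assumes a fresh chain, as in that sample output.

```python
import json

def check_rollup_config(path: str) -> dict:
    """Load a rollup config JSON and verify a few fields every config must have."""
    with open(path) as f:
        cfg = json.load(f)
    for key in ("genesis", "block_time", "l1_chain_id", "l2_chain_id", "batch_inbox_address"):
        assert key in cfg, f"missing required field: {key}"
    # For a fresh chain, the L2 genesis should start at block 0.
    assert cfg["genesis"]["l2"]["number"] == 0, "unexpected L2 genesis block number"
    return cfg

# Example usage:
# cfg = check_rollup_config("rollup.json")
# print(cfg["l2_chain_id"], cfg["block_time"])
```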
- - - -## Adding nodes - -To add nodes to the rollup, you need to initialize `op-node` and `op-geth`, similar to what you did for the first node. -You should *not* add an `op-batcher` because there should be only one. - -### Configure the OS and prerequisites as you did for the first node - -### Build the Optimism monorepo and `op-geth` as you did for the first node - -### Copy from the first node these files: - - ```bash - ~/op-geth/genesis.json - ~/optimism/op-node/rollup.json - ``` - -### Create a new `jwt.txt` file as a shared secret: - - ```bash - cd ~/op-geth - openssl rand -hex 32 > jwt.txt - cp jwt.txt ~/optimism/op-node - ``` - -### Initialize the new op-geth: - - ```bash - cd ~/op-geth - ./build/bin/geth init --datadir=./datadir ./genesis.json - ``` - -### Turn on peer to peer synchronization to enable L2 nodes to synchronize directly -If you do it this way, you won't have to wait until the transactions are written to L1. - If you already have peer to peer synchronization, add the new node to the `--p2p.static` list so it can synchronize. - -### Start `op-geth` (using the same command line you used on the initial node) - -**Important:** Make sure to configure the `--rollup.sequencerhttp` flag to point to your sequencer node. This HTTP endpoint is crucial because `op-geth` will route `eth_sendRawTransaction` calls to this URL. The OP Stack does not currently have a public mempool, so configuring this is required if you want your node to support transaction submission. - - -### Start `op-node` (using the same command line you used on the initial node) - - -## Next steps - -* See the [Node Configuration](/builders/node-operators/configuration/base-config) guide for additional explanation or customization. -* If you experience difficulty at any stage of this process, please reach out to [developer support](https://github.com/ethereum-optimism/developers/discussions). 
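One way to confirm a newly added node is catching up is to poll `op-node`'s `optimism_syncStatus` RPC method. A minimal sketch, assuming the `op-node` RPC listens on `localhost:8547` as in the batcher warning messages shown earlier; the helper functions are ours, not part of the OP Stack:

```python
import json
from urllib import request

def rpc_payload(method: str, params=None) -> dict:
    """Build a JSON-RPC 2.0 request body."""
    return {"jsonrpc": "2.0", "id": 0, "method": method, "params": params or []}

def rpc_call(url: str, method: str, params=None):
    """POST a JSON-RPC request and return its `result` field."""
    body = json.dumps(rpc_payload(method, params)).encode()
    req = request.Request(url, data=body, headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

# Example (requires a running op-node):
# status = rpc_call("http://localhost:8547", "optimism_syncStatus")
# print(status["unsafe_l2"]["number"])  # latest L2 block the node knows about
```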
diff --git a/pages/builders/chain-operators/management/snap-sync.mdx b/pages/builders/chain-operators/management/snap-sync.mdx deleted file mode 100644 index 97e372754..000000000 --- a/pages/builders/chain-operators/management/snap-sync.mdx +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: Using snap sync for chain operators -lang: en-US -description: Learn how to use and enable snap sync on your OP chain. ---- - -import { Callout, Steps } from 'nextra/components' - -# Using snap sync for chain operators - -This guide reviews the optional feature of snap sync for OP chains, including benefits and how to enable the feature. - -Snap sync significantly improves the experience of syncing an OP Stack node. Snap sync is a native feature of go-ethereum that is now optionally enabled on `op-node` & `op-geth`. -Snap sync works by downloading a snapshot of the state from other nodes on the network and is then able to start executing blocks from the completed state rather than having to re-execute every single block. -This means that performing a snap sync is significantly faster than performing a full sync. - -* Snap sync enables node operators on your network to sync faster. -* Snap sync removes the need for nodes on your post Ecotone network to run a [blob archiver](/builders/node-operators/management/blobs). - -## Enable snap sync for chains - -To enable snap sync, chain operators need to spin up a node which is exposed to the network and has transaction gossip disabled. - - -For snap sync, all `op-geth` nodes should expose port `30303` TCP and `30303` UDP to easily find other op-geth nodes to sync from. - * If you set the port with [`--discovery.port`](/builders/node-operators/configuration/execution-config#discoveryport), then you must open the port specified for UDP. - * If you set [`--port`](/builders/node-operators/configuration/execution-config#port), then you must open the port specified for TCP. - * The only exception is for sequencers and transaction ingress nodes. 
- - - - ### Setup a snap sync node - - * Expose port `30303` (`op-geth`'s default discovery port) to the internet on TCP and UDP. - * Disable transaction gossip with the [`--rollup.disabletxpoolgossip`](/builders/node-operators/configuration/execution-config#rollupdisabletxpoolgossip) flag - - ### Enable snap sync on your network - - * Follow the [Node operator snap sync guide](/builders/node-operators/management/snap-sync#enable-snap-sync-for-your-node) to enable snap sync for your chain network. - - -## Next Steps - -* See the [Node configuration](/builders/node-operators/configuration/base-config#configuration) guide for additional explanation or customization. -* If you experience difficulty at any stage of this process, please reach out to [developer support](https://github.com/ethereum-optimism/developers/discussions). diff --git a/pages/builders/chain-operators/management/troubleshooting.mdx b/pages/builders/chain-operators/management/troubleshooting.mdx deleted file mode 100644 index 05a0de551..000000000 --- a/pages/builders/chain-operators/management/troubleshooting.mdx +++ /dev/null @@ -1,69 +0,0 @@ ---- -title: Troubleshooting chain operations -lang: en-US -description: Learn solutions to common problems when troubleshooting chain operations. ---- - -# Troubleshooting: chain operations - -This page lists common troubleshooting scenarios and solutions for chain operators. - -## EvmError in contract deployment - -L1 smart contract deployment fails with the following error: - -```text -EvmError: Revert -``` - -### Solution - -The OP Stack uses deterministic smart contract deployments to guarantee that all contract addresses can be computed ahead of time based on a "salt" value that is provided at deployment time. -Each OP Stack chain must have a unique salt value to ensure that the contract addresses do not collide with other OP Stack chains. - -You can avoid this error by changing the salt used when deploying the L1 smart contracts. 
-The salt value is set by the `IMPL_SALT` environment variable when deploying the contracts. -The `IMPL_SALT` value must be a 32 byte hex string. - -You can generate a random salt value using the following command: - -```bash -export IMPL_SALT=$(openssl rand -hex 32) -``` - -## Failed to find the L2 Heads to start from - -`op-node` fails to execute the derivation process with the following error: - -```text -WARN [02-16|21:22:02.868] Derivation process temporary error attempts=14 err="stage 0 failed resetting: temp: failed to find the L2 Heads to start from: failed to fetch L2 block by hash 0x0000000000000000000000000000000000000000000000000000000000000000: failed to determine block-hash of hash 0x0000000000000000000000000000000000000000000000000000000000000000, could not get payload: not found" -``` - -### Solution - -This error can occur when the data directory for `op-geth` becomes corrupted (for example, as a result of a computer crash). -You will need to reinitialize the data directory. - -If you are following the tutorial for [Creating Your Own L2 Rollup](../tutorials/create-l2-rollup), make sure to rerun the commands within the [Initialize `op-geth`](../tutorials/create-l2-rollup#initialize-op-geth) section. - -If you are not following the tutorial, make sure to take the following steps: - -1. Stop `op-node` and `op-geth`. -2. Delete the corresponding `op-geth` data directory. -3. If running a Sequencer node, import the Sequencer key into the `op-geth` keychain. -4. Reinitialize `op-geth` with the `genesis.json` file. -5. Restart `op-geth` and `op-node`. 
- -## Batcher unable to publish transaction - -`op-batcher` fails to publish transactions with the following error: - -```text -INFO [03-21|14:22:32.754] publishing transaction service=batcher txHash=2ace6d..7eb248 nonce=2516 gasTipCap=2,340,741 gasFeeCap=172,028,434,515 -ERROR[03-21|14:22:32.844] unable to publish transaction service=batcher txHash=2ace6d..7eb248 nonce=2516 gasTipCap=2,340,741 gasFeeCap=172,028,434,515 err="insufficient funds for gas * price + value" -``` - -### Solution - -You will observe this error if the `op-batcher` runs out of ETH to publish transactions to L1. -This problem can be resolved by sending additional ETH to the `op-batcher` address. diff --git a/pages/builders/chain-operators/self-hosted.mdx b/pages/builders/chain-operators/self-hosted.mdx deleted file mode 100644 index b2d9d2424..000000000 --- a/pages/builders/chain-operators/self-hosted.mdx +++ /dev/null @@ -1,110 +0,0 @@ ---- -title: How to start a self-hosted chain -lang: en-US -description: Learn how to start a self-hosted OP Chain with standard configuration. ---- - -import { Callout, Steps } from 'nextra/components' - -# How to start a self-hosted chain - -This guide provides an overview of how to start a self-hosted OP Chain with standard configuration. It walks you through how to build, configure, test, and launch your OP Chain. To skip ahead to custom features or settings, you can explore the [chain operator tutorials](#chain-operator-tutorials). - -## Build your chain - -There are two main steps to get started building your own self-hosted OP Chain: learn fundamental components of OP chains and spin up an OP Stack testnet chain. - - - {

<h3>Learn Fundamental Components of OP Chains</h3>

} - - To work with OP Chains, you'll need to understand the fundamental components of OP Chains. - - * **Chain Architecture**: OP Chains use execution and consensus clients as well as the OP Stack's privileged roles. For more details, see the [Chain Architecture](/builders/chain-operators/architecture) guide. - * **Smart Contracts**: OP Chains use several smart contracts on the L1 - blockchain to manage aspects of the Rollup. Each OP Stack chain has its own - set of [L1 smart contracts](/stack/smart-contracts#layer-1-contracts), - [L2 predeploy contracts](/stack/smart-contracts#layer-2-contracts-predeploys), - and [L2 preinstall contracts](/builders/chain-operators/features/preinstalls) - that are deployed when the chain is created. - * **Preinstalls**: OP Chains come with [preinstalled core contracts](/builders/chain-operators/features/preinstalls), making them usable as soon as a chain is initialized on the OP Stack. - - - You should only use governance approved and audited smart contracts. The monorepo has them tagged with the following pattern `op-contracts/vX.X.X` and you can review the release notes for details on the changes. - - - {

<h3>Launch Your OP Stack Testnet Chain</h3>

} - - * Now, you are ready to spin up your testnet chain. - * Just follow the [Creating Your Own L2 Rollup Testnet](/builders/chain-operators/tutorials/create-l2-rollup) tutorial to get started. -
- -## Configure your chain - -OP Chains can be configured for throughput, cost, and other decentralization tradeoffs. The following steps are intended for standard configuration of OP Chains. - - - {

<h3>Setup Key Management and Privileged Roles</h3>

} - - * Configure hot wallets and cold wallets using the guide for [Managing Your Keys](/builders/chain-operators/management/key-management). - * Refer to the [Privileged Roles](/chain/security/privileged-roles) guide for detailed security information. - - {

<h3>Make Standard Chain Configurations</h3>

} - - * Configure your [OP Chain parameters](/builders/chain-operators/configuration/overview) based on your particular tradeoffs. You'll need to configure the **rollup**, **batcher**, and **proposer** for optimal performance. - * Update your batcher to [post transaction data within blobs](/builders/chain-operators/management/blobs) instead of call data to maximize your fee savings. - * Enable [snap sync](/builders/chain-operators/management/snap-sync) on your OP Chain to significantly improve the experience and speed of syncing an OP Stack node. - - {

Set Public RPC Endpoint

} - - * Set the [public RPC Endpoint](/builders/chain-operators/architecture#ingress-traffic), so your OP Chain can handle large volumes of RPC requests from your users. - - {

Enable Analytics for Onchain Data

} - - * Enable [analytics tracking for your OP Chain](/builders/node-operators/management/metrics), to immediately generate onchain metrics after mainnet launch. -
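Once the configuration steps above are done, a quick smoke test of the public RPC endpoint is to POST a standard JSON-RPC request to it. A sketch — the URL is a placeholder for your chain's endpoint, and the live `curl` call is left commented out:

```shell
# Placeholder endpoint -- substitute your chain's public RPC URL.
RPC_URL="https://rpc.example.com"
PAYLOAD='{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}'

# Uncomment to run against a live endpoint; a healthy node responds with
# a result field containing the chain ID as a hex string.
# curl -s -X POST -H 'Content-Type: application/json' --data "$PAYLOAD" "$RPC_URL"

echo "$PAYLOAD" | grep -q '"method":"eth_chainId"' && echo "payload ok"
```

`eth_chainId` is a cheap call that confirms the node is serving traffic and lets you verify the endpoint is pointed at the expected chain.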
- -## Test your chain - -Before launching on Mainnet, thoroughly test and debug OP Chain contracts, features, and security. Here are your options. - - - {

Use a Block Explorer

} - - Block explorers allow you to access transaction history and conduct chain debugging. - - * Option 1: Select an [external block explorer](/builders/tools/build/block-explorers) to use with your OP Chain. - * Option 2: Deploy your own block explorer for your OP Chain, such as [Blockscout](/builders/chain-operators/tools/explorer). - - {

Send Test Transactions

} - - As part of testing your OP Chain, you'll need to send test or example transactions to the new network. - - * Test [sending L2 transactions](https://github.com/ethereum-optimism/tx-overload) to understand how much load your new chain can handle. - * Trace [deposits and withdrawals](/builders/app-developers/tutorials/sdk-trace-txns) using the SDK or viem. - * Run [basic transaction tests](https://metamask.io/) using Metamask. -
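Before running a load test like the one linked above, it can be useful to bound the best-case throughput from first principles. Illustrative arithmetic only — the values below are examples, not your chain's actual parameters:

```shell
# Example values only -- substitute your chain's actual gas limit and block time.
GAS_LIMIT=30000000     # gas per L2 block
BLOCK_TIME=2           # seconds per L2 block
GAS_PER_TRANSFER=21000 # gas for a simple ETH transfer

TPS=$(( GAS_LIMIT / GAS_PER_TRANSFER / BLOCK_TIME ))
echo "upper bound: ~${TPS} simple transfers/sec"
```

This is a ceiling, not a benchmark: real throughput is also limited by the batcher, data availability costs, and node performance, which is exactly what the load-testing tutorial measures.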
- -## Launch your chain on Mainnet - -After testing is complete, you are ready to launch your OP Chain on Mainnet. Optionally, you can also request [launch support](https://share.hsforms.com/1yENj8CV9TzGYBASD0JC8_gqoshb) and subscribe to [receive chain upgrade notifications](https://github.com/ethereum-optimism/developers/discussions/categories/announcements). - -## Chain operator tutorials - -Here's a curated collection of chain operator tutorials put together by the Optimism community. -They'll help you get a head start deploying your first OP Stack chain. - -| Tutorial Name | Description | Difficulty Level | -| -------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------- | ---------------- | -| [Creating Your Own L2 Rollup](tutorials/create-l2-rollup) | Learn how to spin up your own OP Stack testnet chain | 🟡 Medium | -| [Adding Attributes to the Derivation Function](tutorials/adding-derivation-attributes) | Learn how to modify the derivation function for an OP Stack chain to track the amount of ETH being burned on L1. | 🟢 Easy | -| [Adding a Precompile](tutorials/adding-precompiles) | Learn how to run an EVM with a new precompile for OP Stack chain operations to speed up calculations that are not currently supported. | 🟢 Easy | -| [Modifying Predeployed Contracts](tutorials/modifying-predeploys) | Learn how to modify predeployed contracts for an OP Stack chain by upgrading the proxy. | 🟢 Easy | -| [Pause and Unpause the Bridge](/stack/security/pause) | Learn how to pause `OptimismPortal` as a backup safety mechanism on your OP Stack chain. 
| 🟢 Easy | -| [Integrating a DA Layer](tutorials/integrating-da-layer) | Learn how to integrate a new DA Layer with Alt-DA | 🟢 Easy | - -You can also [suggest a new tutorial](https://github.com/ethereum-optimism/docs/issues/new?assignees=\&labels=tutorial%2Cdocumentation%2Ccommunity-request\&projects=\&template=suggest_tutorial.yaml\&title=%5BTUTORIAL%5D+Add+PR+title) if you have something specific in mind. We'd love to grow this list! - -## Next steps - -* After deploying your chain, check the [Rollup Operations](./management/operations) guide for common operations you'll need to run with your rollup. -* If you run into any problems, please visit the [Chain Troubleshooting Guide](./management/troubleshooting) for help. diff --git a/pages/builders/chain-operators/tools.mdx b/pages/builders/chain-operators/tools.mdx deleted file mode 100644 index c8018d508..000000000 --- a/pages/builders/chain-operators/tools.mdx +++ /dev/null @@ -1,29 +0,0 @@ ---- -title: Tools -lang: en-US -description: >- - Learn about tools in the Optimism ecosystem. This guide provides detailed - information and resources about tools. ---- - -import { Card, Cards } from 'nextra/components' - -# Tools - -This section provides information on chain monitoring options, deploying a block explorer, configuring a challenger for your chain, conductor, and deployer. You'll find guides, overviews, and tools to help you understand and work with these topics. 
- - - - - - - - - - - - - - - - diff --git a/pages/builders/chain-operators/tools/_meta.json b/pages/builders/chain-operators/tools/_meta.json deleted file mode 100644 index 990ea87a2..000000000 --- a/pages/builders/chain-operators/tools/_meta.json +++ /dev/null @@ -1,9 +0,0 @@ -{ - "chain-monitoring": "Chain monitoring", - "explorer": "Block explorer", - "op-challenger": "op-challenger", - "op-conductor": "op-conductor", - "op-deployer": "op-deployer", - "op-txproxy": "op-txproxy", - "proxyd": "proxyd" -} diff --git a/pages/builders/chain-operators/tools/chain-monitoring.mdx b/pages/builders/chain-operators/tools/chain-monitoring.mdx deleted file mode 100644 index 14da3c81b..000000000 --- a/pages/builders/chain-operators/tools/chain-monitoring.mdx +++ /dev/null @@ -1,128 +0,0 @@ ---- -title: Chain monitoring options -lang: en-US -description: Learn about onchain and offchain monitoring options for your OP Stack chain. ---- - -import { Callout } from 'nextra/components' - -# Chain monitoring options - -This explainer covers the basics of onchain and offchain monitoring options for your OP Stack chain. Onchain monitoring services allow chain operators to monitor the overall system and onchain events. -Offchain monitoring lets chain operators monitor the operation and behavior of nodes and other offchain components. - -## Onchain monitoring services - -Onchain monitoring services provide insights into the overall system, helping chain operators track and monitor onchain events. Some examples of onchain monitoring services include `monitorism` and `dispute-mon`. - -### `monitorism` - -Monitorism is a tooling suite that supports monitoring and active remediation actions for the OP Stack chain. Monitorism uses monitors as a passive security layer, providing automated monitoring for the OP Stack and alerting on specific events that could be a sign of a security incident.
- -Currently, the list of monitors includes: - -Security integrity monitors: These are monitors necessary for ensuring the bridges between L2 and L1 are safe and work as expected. These monitors are divided into two subgroups: - -* Pre-Faultproof Chain Monitors: - * Fault Monitor: checks for changes in output roots posted to the L2OutputOracle contract. When a change is detected, it reconstructs the output root from a trusted L2 source and looks for a match. - * Withdrawals Monitor: checks for new withdrawals that have been proven to the OptimismPortal contract. Each withdrawal is checked against the `L2ToL1MessagePasser` contract. -* Faultproof chain monitors: - * Faultproof Withdrawal: The Faultproof Withdrawal component monitors `ProvenWithdrawals` events on the `OptimismPortal` contract and performs checks to detect any violations of invariant conditions on the chain. If a violation is detected, the issue is logged, and a Prometheus metric is set for the event. This component is designed to work exclusively with chains that are already utilizing the Fault Proofs system. This is a new version of the deprecated `chain-mon`, `faultproof-wd-mon`. For detailed information on how the component works and the algorithms used, please refer to the component README. - -Security monitors: These tools monitor other aspects of several contracts used in Optimism: - -* Global Events Monitor: takes YAML rules as configuration and monitors the events that are emitted on the chain. -* Liveness Expiration Monitor: monitors the liveness expiration on Safes. -* Balances Monitor: emits a metric reporting the balances for the configured accounts. -* Multisig Monitor: The multisig monitor reports the paused status of the OptimismPortal contract. If set, it reports the latest nonce of the configured Safe address and the latest presigned nonce stored in One Password.
The latest presigned nonce is identified by looking for items in the configured vault that follow a `ready-.json` name. The highest nonce of this item name format is reported. -* Drippie Monitor: tracks the execution and executability of drips within a Drippie contract. -* Secrets Monitor: takes a Drippie contract as a parameter and monitors for any drips within that contract that use the `CheckSecrets` dripcheck contract. `CheckSecrets` is a dripcheck that allows a drip to begin once a specific secret has been revealed (after a delay period) and cancels the drip if a second secret is revealed. Monitoring these secrets is important, as their revelation may indicate that the secret storage platform has been compromised and someone is attempting to exfiltrate the ETH controlled by the drip. - -For more information on these monitors and how to use them, [check out the repo](https://github.com/ethereum-optimism/monitorism?tab=readme-ov-file#monitorism). - -### `dispute-mon` - -Chain operators should consider running `op-dispute-mon`. It's an essential security monitoring service that tracks game statuses, providing visibility over the last 28 days. - -`dispute-mon` is set up and built the same way as `op-challenger`. This means that you can run it the same way (run `make op-dispute-mon` in the directory). - -A basic configuration option would look like this: - -``` -OP_DISPUTE_MON_LOG_FORMAT=logfmt -OP_DISPUTE_MON_METRICS_ENABLED=true -OP_DISPUTE_MON_METRICS_ADDR=0.0.0.0 -OP_DISPUTE_MON_METRICS_PORT=7300 - -OP_DISPUTE_MON_L1_ETH_RPC=.. -OP_DISPUTE_MON_ROLLUP_RPC=.. -OP_DISPUTE_MON_GAME_FACTORY_ADDRESS=.. - -OP_DISPUTE_MON_HONEST_ACTORS=.. -``` - -`OP_DISPUTE_MON_HONEST_ACTORS` is a CSV (no spaces) list of addresses that are used for the honest `op-challenger` instances. - -Additional flags: - -* `OP_DISPUTE_MON_GAME_WINDOW`: This is the window of time to report on games. It should leave a buffer beyond the max game duration for bond claiming. 
If Fault Proof game parameters are not changed (e.g. MAX\_CLOCK\_DURATION), it is recommended to leave this as the default. -* `OP_DISPUTE_MON_MONITOR_INTERVAL`: The interval at which to check for new games. Defaults to 30 seconds currently. -* `OP_DISPUTE_MON_MAX_CONCURRENCY`: The max thread count. Defaults to 5 currently. - -You can find more info on `op-dispute-mon` on [the repo](https://github.com/ethereum-optimism/optimism/tree/develop/op-dispute-mon). - -Chain operators can easily create a Grafana dashboard for the Dispute Monitor using the following JSON file: [Download the Dispute Monitor JSON](/resources/grafana/dispute-monitor-1718214549035.json). - -## Offchain component monitoring - -Offchain monitoring allows chain operators to monitor the operation and behavior of nodes and other offchain components. Some of the more common components that you'll likely want to monitor include `op-node`, `op-geth`, `op-proposer`, `op-batcher`, and `op-challenger`. -The general steps for enabling offchain monitoring are consistent across all the OP components: - -1. Expose the monitoring port by enabling the `--metrics.enabled` flag -2. Customize the metrics port and address via the `--metrics.port` and `--metrics.addr` flags, respectively -3. Use [Prometheus](https://prometheus.io/) to scrape data from the metrics port -4. Save the data in `influxdb` -5. Share the data with [Grafana](https://grafana.com/) to build your custom dashboard - -### `op-node` - -`op-node` metrics and monitoring are detailed in the [Node Metrics and Monitoring](/builders/node-operators/management/metrics) guide. To enable metrics, pass the `--metrics.enabled` flag to `op-node` and follow the steps above for customization options. -See [this curated list](/builders/node-operators/management/metrics#important-metrics) for important metrics to track specifically for `op-node`. - -### `op-geth` - -To enable metrics, pass the `--metrics.enabled` flag to `op-geth`.
You can customize the metrics port and address via the `--metrics.port` and `--metrics.addr` flags, respectively. - -### `op-proposer` - -To enable metrics, pass the `--metrics.enabled` flag to `op-proposer`. You can customize the metrics port and address via the `--metrics.port` and `--metrics.addr` flags, respectively. - -You can find more information about these flags in our [Proposer configuration doc](https://docs.optimism.io/builders/chain-operators/configuration/proposer#metricsenabled). - -### `op-batcher` - -To enable metrics, pass the `--metrics.enabled` flag to `op-batcher`. You can customize the metrics port and address via the `--metrics.port` and `--metrics.addr` flags, respectively. - -You can find more information about these flags in our [Batcher configuration doc](https://docs.optimism.io/builders/chain-operators/configuration/batcher#metricsenabled). - -### `op-challenger` - -The `op-challenger` operates as the *honest actor* in the fault dispute system and defends the chain by securing the `OptimismPortal` and ensuring the game always resolves to the correct state of the chain. -For verifying the legitimacy of claims, `op-challenger` relies on a synced, trusted rollup node as well as a trace provider (e.g., [Cannon](/stack/fault-proofs/cannon)). See the [OP-Challenger Explainer](/stack/fault-proofs/challenger) for more information on this service. - -To enable metrics, pass the `--metrics.enabled` flag to `op-challenger` and follow the steps above for customization options.
- -``` - --metrics.addr value (default: "0.0.0.0") ($OP_CHALLENGER_METRICS_ADDR) - Metrics listening address - - --metrics.enabled (default: false) ($OP_CHALLENGER_METRICS_ENABLED) - Enable the metrics server - - --metrics.port value (default: 7300) ($OP_CHALLENGER_METRICS_PORT) - Metrics listening port -``` - -## Next steps - -* If you encounter difficulties at any stage of this process, please reach out to [developer support](https://github.com/ethereum-optimism/developers/discussions). diff --git a/pages/builders/chain-operators/tools/explorer.mdx b/pages/builders/chain-operators/tools/explorer.mdx deleted file mode 100644 index 3046fe498..000000000 --- a/pages/builders/chain-operators/tools/explorer.mdx +++ /dev/null @@ -1,65 +0,0 @@ ---- -title: Block explorer -lang: en-US -description: Learn how to deploy a Blockscout block explorer for your OP Stack chain. ---- - -import { Callout } from 'nextra/components' - -# Deploying a block explorer - -[Blockscout](https://www.blockscout.com/) is an open source block explorer that supports OP Stack chains. -Keep reading for a quick overview on how to deploy Blockscout for your OP Stack chain. - - - Check out the [Blockscout documentation](https://docs.blockscout.com) for up-to-date information on how to deploy and maintain a Blockscout instance. - - -## Dependencies - -* [Docker](https://docs.docker.com/get-docker/) - -## Create an archive node - -Blockscout needs access to an [archive node](https://www.alchemy.com/overviews/archive-nodes#archive-nodes) for your OP Stack chain to properly index transactions, blocks, and internal interactions. -If using `op-geth`, you can run a node in archive mode with the `--gcmode=archive` flag. - - - Archive nodes take up significantly more disk space than full nodes. - You may need to have 2-4 terabytes of disk space available (ideally SSD) if you intend to run an archive node for a production OP Stack chain. 
- 1-200 gigabytes of disk space may be sufficient for a development chain. - - -## Installation - -Blockscout can be started from its source code on GitHub. - -```sh -git clone https://github.com/blockscout/blockscout.git -b production-optimism -cd blockscout/docker-compose -``` - -## Configuration - -Review the configuration files within the `envs` directory and make any necessary changes. -In particular, make sure to review `envs/common-blockscout.env` and `envs/common-frontend.env`. - -## Starting Blockscout - -Start Blockscout with the following command: - -```sh -DOCKER_REPO=blockscout-optimism docker compose -f geth.yml up -``` - -## Usage - -### Explorer - -After Blockscout is started, browse to [http://localhost](http://localhost) to view the user interface. -Note that this URL may differ if you have changed the Blockscout configuration. - -### API - -Blockscout provides both a REST API and a GraphQL API. -Refer to the [API documentation](https://docs.blockscout.com/for-users/api) for more information. diff --git a/pages/builders/chain-operators/tools/op-challenger.mdx b/pages/builders/chain-operators/tools/op-challenger.mdx deleted file mode 100644 index f7532d072..000000000 --- a/pages/builders/chain-operators/tools/op-challenger.mdx +++ /dev/null @@ -1,223 +0,0 @@ ---- -title: How to configure challenger for your chain -lang: en-US -description: Learn how to configure challenger for your OP Stack chain. ---- - -import { Callout, Steps } from 'nextra/components' - -# How to configure challenger for your chain - -This guide provides a walkthrough of setting up the configuration and monitoring options for `op-challenger`. See the [OP-Challenger Explainer](/stack/fault-proofs/challenger) for a general overview of this fault proofs feature. 
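`op-challenger` depends on four RPC endpoints (L1 execution, L1 beacon, L2 execution, and rollup node), all covered in the steps that follow. A tiny pre-flight sketch for catching unset endpoints before starting the service — the variable and function names here are purely illustrative, not challenger flags:

```shell
# Illustrative pre-flight check: fail fast if any endpoint variable is unset.
check_set() {
  for name in "$@"; do
    if [ -z "$(eval echo "\$$name")" ]; then
      echo "missing: $name"
      return 1
    fi
  done
}

L1_ETH_RPC="http://localhost:8545"   # placeholder values
L1_BEACON="http://localhost:5052"
L2_ETH_RPC="http://localhost:9545"
ROLLUP_RPC="http://localhost:9546"

check_set L1_ETH_RPC L1_BEACON L2_ETH_RPC ROLLUP_RPC && echo "all endpoints set"
```

A check like this is cheap insurance in a startup script, since a challenger pointed at a missing or wrong endpoint can fail to defend games.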
- - - ### Build the executable - - * Clone the monorepo - - ```bash - git clone https://github.com/ethereum-optimism/optimism.git - ``` - - * Check out the [latest release of `op-challenger`](https://github.com/ethereum-optimism/optimism/releases/tag/op-challenger%2Fv1.0.1) and use the commit to deploy. Alternatively, chain operators can use the prebuilt [challenger docker images](https://us-docker.pkg.dev/oplabs-tools-artifacts/images/op-challenger:v1.0.1). - If a Docker image is used, it already comes with `op-program` server and an executable for Cannon embedded, so the Cannon bin doesn't need to be specified. - - ```bash - git checkout op-challenger/vX.Y.Z - ``` - - - Chain operators need to specify the arguments and `op-program` server if `op-challenger` is running outside of Docker, but there's a Cannon server option which points to `op-program`'s executable. - - - * Build challenger - - ```bash - cd optimism - pnpm install - make op-challenger - ``` - - ### Configure challenger - - * Configure challenger with the required flags. Tip: Use the `op-challenger --help` to view all subcommands, command line, and environment variable options. - * The example config file below shows the flags to configure in this step: - - ```docker - challenger: - user: "1000" - image: us-docker.pkg.dev/oplabs-tools-artifacts/images/op-challenger:v0.2.11 - command: - - "op-challenger" - - "--l1-eth-rpc=http://sepolia-el-1:8545" - - "--l1-beacon=http://sepolia-cl-1:5051" - - "--l2-eth-rpc=http://op-sepolia-el-1:8545" - - "--rollup-rpc=http://op-sepolia-cl-1:5051" - - "--selective-claim-resolution" - - "--private-key=...." - - "--network=..." - - "--datadir=/data" - - "--cannon-prestates-url=..." - volumes: - - "./challenger-data:/data" - ``` - - #### `--l1-eth-rpc` - - * This is the HTTP provider URL for a standard L1 node, can be a full node. `op-challenger` will be sending many requests, so chain operators need a node that is trusted and can easily handle many transactions. 
- * Note: Challenger has a lot of money, and it will spend it if it needs to interact with games. That might risk not defending games or challenging games correctly, so chain operators should really trust the nodes being pointed at Challenger. - - #### `--l1-beacon` - - * This is needed just to get blobs from. - * In some instances, chain operators might need a blob archiver or L1 consensus node configured not to prune blobs: - * If the chain is proposing regularly, a blob archiver isn't needed. There's only a small window in the blob retention period that games can be played. - * If the chain doesn't post a valid output root in 18 days, then a blob archiver running a challenge game is needed. If the actor gets pushed to the bottom of the game, it could lose if it's the only one protecting the chain. - - #### `--l2-eth-rpc` - - * This needs to be `op-geth` archive node, with `debug` enabled. - * Technically doesn't need to go to bedrock, but needs to have access to the start of any game that is still in progress. - - #### `--rollup-rpc` - - * This needs to be an `op-node` archive node because challenger needs access to output roots from back when the games start. See below for important configuration details: - - 1. Safe Head Database (SafeDB) Configuration for op-node: - - * The `op-node` behind the `op-conductor` must have the SafeDB enabled to ensure it is not stateless. - * To enable SafeDB, set the `--safedb.path` value in your configuration. This specifies the file path used to persist safe head update data. - * Example Configuration: - - ``` - --safedb.path # Replace with your actual path - ``` - - - If this path is not set, the SafeDB feature will be disabled. - - - 2. Ensuring Historical Data Availability: - - * Both `op-node` and `op-geth` must have data from the start of the games to maintain network consistency and allow nodes to reference historical state and transactions. 
- * For `op-node`: Configure it to maintain a sufficient history of blockchain data locally or use an archive node. - * For `op-geth`: Similarly, configure to store or access historical data. - * Example Configuration: - - ``` - op-node \ - --rollup-rpc \ - --safedb.path - ``` - - - Replace `` with the URL of your archive node and `` with the desired path for storing SafeDB data. - - - #### `--private-key` - - * Chain operators must specify a private key or use something else (like `op-signer`). - * This uses the same transaction manager arguments as `op-node` , batcher, and proposer, so chain operators can choose one of the following options: - * a mnemonic - * a private key - * `op-signer` endpoints - - #### `--network` - - * This identifies the L2 network `op-challenger` is running for, e.g., `op-sepolia` or `op-mainnet`. - * When using the `--network` flag, the `--game-factory-address` will be automatically pulled from the [`superchain-registry`](https://github.com/ethereum-optimism/superchain-registry/blob/main/chainList.json). - * When cannon is executed, challenger needs the roll-up config and the L2 Genesis, which is op-geth's Genesis file. Both files are automatically loaded when Cannon Network is used, but custom networks will need to specify both Cannon L2 Genesis and Cannon rollup config. 
- * For custom networks not in the [`superchain-registry`](https://github.com/ethereum-optimism/superchain-registry/blob/main/chainList.json), the `--game-factory-address` and rollup must be specified, as follows: - - ``` - --cannon-rollup-config rollup.json \ - --cannon-l2-genesis genesis-l2.json \ - # use this if running challenger outside of the docker image - --cannon-server ./op-program/bin/op-program \ - # json or url, version of op-program deployed on chain - # if you use the wrong one, you will lose the game - # if you deploy your own contracts, you specify the hash, the root of the json file - # op mainnet are tagged versions of op-program - # make reproducible prestate - # challenger verifies that onchain - --cannon-prestate ./op-program/bin/prestate.json \ - # load the game factory address from system config or superchain registry - # point the game factory address at the dispute game factory proxy - --game-factory-address - ``` - - - These options vary based on which `--network` is specified. Chain operators always need to specify a way to load prestates and must also specify the cannon-server whenever the docker image isn't being used. - - - #### `--datadir` - - * This is a directory that `op-challenger` can write to and store whatever data it needs. It will manage this directory to add or remove data as needed under that directory. - * If running in docker, it should point to a docker volume or mountpoint, so the data isn't lost on every restart. The data can be recreated if needed but particularly if challenger has executed cannon as part of responding to a game it may mean a lot of extra processing. - - #### `--cannon-prestates-url` - - The pre-state is effectively the version of `op-program` that is deployed on chain. And chain operators must use the right version. `op-challenger` will refuse to interact with games that have a different absolute prestate hash to avoid making invalid claims. 
If deploying your own contracts, chain operators must specify an absolute prestate hash taken from the `make reproducible-prestate` command during contract deployment, which will also build the required prestate json file. - - All governance approved releases use a tagged version of `op-program`. These can be rebuilt by checking out the version tag and running `make reproducible-prestate`. - - * There are two ways to specify the prestate to use: - * `--cannon-prestate`: specifies a path to a single Cannon pre-state Json file - * `--cannon-prestates-url`: specifies a URL to load pre-states from. This enables participating in games that use different prestates, for example due to a network upgrade. The prestates are stored in this directory named by their hash. - * Example final URL for a prestate: - * [https://example.com/prestates/0x031e3b504740d0b1264e8cf72b6dde0d497184cfb3f98e451c6be8b33bd3f808.json](https://example.com/prestates/0x031e3b504740d0b1264e8cf72b6dde0d497184cfb3f98e451c6be8b33bd3f808.json) - * This file contains the cannon memory state. - - - Challenger will refuse to interact with any games if it doesn't have the matching prestate. - - - ### Execute challenger - - The final step is to execute challenger with the required flags. 
It will look something like this (but with required flags added): - - ```bash - ./op-challenger/bin/op-challenger \ - --trace-type cannon \ - --l1-eth-rpc http://localhost:8545 \ - --rollup-rpc http://localhost:9546 \ - --game-factory-address $DISPUTE_GAME_FACTORY \ - --datadir temp/challenger-data \ - --cannon-rollup-config .devnet/rollup.json \ - --cannon-l2-genesis .devnet/genesis-l2.json \ - --cannon-bin ./cannon/bin/cannon \ - --cannon-server ./op-program/bin/op-program \ - --cannon-prestate ./op-program/bin/prestate.json \ - --l2-eth-rpc http://localhost:9545 \ - --mnemonic "test test test test test test test test test test test junk" \ - --hd-path "m/44'/60'/0'/0/8" \ - ``` - - ### Test and debug challenger (optional) - - This is an optional step to use `op-challenger` subcommands, which allow chain operators to interact with the Fault Proof System onchain for testing and debugging purposes. For example, it is possible to test and explore the system in the following ways: - - * create games yourself, and it doesn't matter if the games are valid or invalid. - * perform moves in games and then claim and resolve things. - - Here's the list of op-challenger subcommands: - - | subcommand | description | - | --------------- | -------------------------------------------------------- | - | `list-games` | List the games created by a dispute game factory | - | `list-claims` | List the claims in a dispute game | - | `list-credits` | List the credits in a dispute game | - | `create-game` | Creates a dispute game via the factory | - | `move` | Creates and sends a move transaction to the dispute game | - | `resolve` | Resolves the specified dispute game if possible | - | `resolve-claim` | Resolves the specified claim if possible | - - Additionally, chain operators should consider running `op-dispute-mon`. 
It's a security monitoring service that gives chain operators visibility into the status of all games over the last 28 days. - Chain operators can easily create a Grafana dashboard for the Dispute Monitor using the following JSON file: [Download the Dispute Monitor JSON](/resources/grafana/dispute-monitor-1718214549035.json). - - -## Next steps - -* Additional questions? See the FAQ section in the [OP Challenger Explainer](/stack/fault-proofs/challenger). -* For more detailed info on `op-challenger`, see the [specs](https://specs.optimism.io/fault-proof/stage-one/honest-challenger-fdg.html). -* If you experience any problems, please reach out to [developer support](https://github.com/ethereum-optimism/developers/discussions). diff --git a/pages/builders/chain-operators/tools/op-conductor.mdx b/pages/builders/chain-operators/tools/op-conductor.mdx deleted file mode 100644 index 50acf2fcb..000000000 --- a/pages/builders/chain-operators/tools/op-conductor.mdx +++ /dev/null @@ -1,730 +0,0 @@ ---- -title: Conductor -lang: en-US -description: Learn what the op-conductor is and how to use it to create a highly available and reliable sequencer. ---- - -import { Callout, Tabs, Steps } from 'nextra/components' - -# Conductor - -This page will teach you what the `op-conductor` service is and how it works at -a high level. It will also get you started on setting it up in your own -environment. - -## Enhancing sequencer reliability and availability - -The [op-conductor](https://github.com/ethereum-optimism/optimism/tree/develop/op-conductor) -is an auxiliary service designed to enhance the reliability and availability of -a sequencer within high-availability setups. By minimizing the risks -associated with a single point of failure, the op-conductor ensures that the -sequencer remains operational and responsive.
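The high-availability property above comes from Raft majority voting: a cluster of N conductors needs a quorum of ⌊N/2⌋+1 nodes to elect and keep a leader. Illustrative arithmetic for a typical 3-sequencer cluster (not conductor code):

```shell
# Illustrative arithmetic (not conductor code): Raft needs a majority of
# nodes, so a 3-node cluster keeps sequencing with one node down.
NODES=3
QUORUM=$(( NODES / 2 + 1 ))      # 2 of 3 must be reachable
TOLERATED=$(( NODES - QUORUM ))  # at most 1 failure survivable
echo "quorum=${QUORUM} tolerated_failures=${TOLERATED}"
```

This is why a 3-node cluster tolerates exactly one failure: losing two nodes leaves no majority, so no leader can be elected and sequencing stalls rather than forking.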
- -### Assumptions - -It is important to note that the `op-conductor` does not incorporate Byzantine -fault tolerance (BFT). This means the system operates under the assumption that -all participating nodes are honest and act correctly. - -### Summary of guarantees - -The design of the `op-conductor` provides the following guarantees: - -* **No Unsafe Reorgs** -* **No Unsafe Head Stall During Network Partition** -* **100% Uptime with No More Than 1 Node Failure** - -## Design - -![op-conductor.](/img/builders/chain-operators/op-conductor.svg) - -**On a high level, `op-conductor` serves the following functions:** - -### Raft consensus layer participation - -* **Leader determination:** Participates in the Raft consensus algorithm to - determine the leader among sequencers. -* **State management:** Stores the latest unsafe block ensuring consistency - across the system. - -### RPC request handling - -* **Admin RPC:** Provides administrative RPCs for manual recovery scenarios, - including, but not limited to: stopping the leadership vote and removing itself - from the cluster. -* **Health RPC:** Offers health RPCs for the `op-node` to determine whether it - should allow the publishing of transactions and unsafe blocks. - -### Sequencer health monitoring - -* Continuously monitors the health of the sequencer (op-node) to ensure - optimal performance and reliability. - -### Control loop management - -* Implements a control loop to manage the status of the sequencer (op-node), - including starting and stopping operations based on different scenarios and - health checks. - -## Conductor state transition - -The following is a state machine diagram of how the op-conductor manages the -sequencers Raft consensus. 
- -![op-conductor-state-transition.](/img/builders/chain-operators/op-conductor-state-transition.svg) - -**Helpful tips:** To better understand the diagram, focus on one node at a time: -note which states can transition into it and which states it can -transition to. This makes it easier to follow how the -state transitions are handled. - -## Setup - -At OP Labs, op-conductor is deployed as a Kubernetes StatefulSet because it -requires a persistent volume to store the raft log. This guide describes -setting up conductor on an existing network without incurring downtime. - -You can utilize the [op-conductor-ops](https://github.com/ethereum-optimism/infra/tree/main/op-conductor-ops) tool to confirm the conductor status between the steps. - -### Assumptions - -This setup guide has the following assumptions: - -* 3 deployed sequencers (sequencer-0, sequencer-1, sequencer-2) that are all - in sync and in the same VPC network -* sequencer-0 is currently the active sequencer -* You can execute a blue/green style sequencer deployment workflow that - involves no downtime (described below) -* conductor and sequencers are running in k8s or some other container - orchestrator (VM-based deployment may be slightly different and not covered - here) - -### Spin up op-conductor - - - {

Deploy conductor

} - - Deploy a conductor instance per sequencer with sequencer-1 as the raft cluster - bootstrap node: - - * suggested conductor configs: - - ```yaml - OP_CONDUCTOR_CONSENSUS_ADDR: '' - OP_CONDUCTOR_CONSENSUS_PORT: '50050' - OP_CONDUCTOR_EXECUTION_RPC: ':8545' - OP_CONDUCTOR_HEALTHCHECK_INTERVAL: '1' - OP_CONDUCTOR_HEALTHCHECK_MIN_PEER_COUNT: '2' # set based on your internal p2p network peer count - OP_CONDUCTOR_HEALTHCHECK_UNSAFE_INTERVAL: '5' # recommend a 2-3x multiple of your network block time to account for temporary performance issues - OP_CONDUCTOR_LOG_FORMAT: logfmt - OP_CONDUCTOR_LOG_LEVEL: info - OP_CONDUCTOR_METRICS_ADDR: 0.0.0.0 - OP_CONDUCTOR_METRICS_ENABLED: 'true' - OP_CONDUCTOR_METRICS_PORT: '7300' - OP_CONDUCTOR_NETWORK: '' - OP_CONDUCTOR_NODE_RPC: ':8545' - OP_CONDUCTOR_RAFT_SERVER_ID: 'unique raft server id' - OP_CONDUCTOR_RAFT_STORAGE_DIR: /conductor/raft - OP_CONDUCTOR_RPC_ADDR: 0.0.0.0 - OP_CONDUCTOR_RPC_ENABLE_ADMIN: 'true' - OP_CONDUCTOR_RPC_ENABLE_PROXY: 'true' - OP_CONDUCTOR_RPC_PORT: '8547' - ``` - - * sequencer-1 op-conductor extra config: - - ```yaml - OP_CONDUCTOR_PAUSED: "true" - OP_CONDUCTOR_RAFT_BOOTSTRAP: "true" - ``` - - {

Pause two conductors

}
-
-  Pause the `sequencer-0` & `sequencer-2` conductors with a [conductor_pause](#conductor_pause)
-  RPC request.
-
-  {

Update op-node configuration and switch the active sequencer

} - - Deploy an `op-node` config update to all sequencers that enables conductor. Use - a blue/green style deployment workflow that switches the active sequencer to - `sequencer-1`: - - * all sequencer op-node configs: - - ```yaml - OP_NODE_CONDUCTOR_ENABLED: "true" # this is what commits unsafe blocks to the raft logs - OP_NODE_RPC_ADMIN_STATE: "" # this flag can't be used with conductor - ``` - - {

Confirm sequencer switch was successful

} - - Confirm `sequencer-1` is active and successfully producing unsafe blocks. - Because `sequencer-1` was the raft cluster bootstrap node, it is now committing - unsafe payloads to the raft log. - - {

Add voting nodes

} - - Add voting nodes to cluster using [conductor_AddServerAsVoter](#conductor_addserverasvoter) - RPC request to the leader conductor (`sequencer-1`) - - {

Confirm state

} - - Confirm cluster membership and sequencer state: - - * `sequencer-0` and `sequencer-2`: - 1. raft cluster follower - 2. sequencer is stopped - 3. conductor is paused - 4. conductor enabled in op-node config - - * `sequencer-1` - 1. raft cluster leader - 2. sequencer is active - 3. conductor is paused - 4. conductor enabled in op-node config - - {

Resume conductors

} - - Resume all conductors with [conductor\_resume](#conductor_resume) RPC request to - each conductor instance. - - {

Confirm state

} - - Confirm all conductors successfully resumed with [conductor_paused](#conductor_paused) - - {

Transfer leadership

} - - Trigger leadership transfer to `sequencer-0` using [conductor_transferLeaderToServer](#conductor_transferleadertoserver) - - {

Confirm state

} - - * `sequencer-1` and `sequencer-2`: - 1. raft cluster follower - 2. sequencer is stopped - 3. conductor is active - 4. conductor enabled in op-node config - - * `sequencer-0` - 1. raft cluster leader - 2. sequencer is active - 3. conductor is active - 4. conductor enabled in op-node config - - {

Update configuration

}
-
-  Deploy a config change to the `sequencer-1` conductor to remove the
-  `OP_CONDUCTOR_PAUSED: true` flag and the `OP_CONDUCTOR_RAFT_BOOTSTRAP` flag.
-
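With the final configuration change deployed, it can help to script the end-state verification. The sketch below is illustrative only: it assumes the conductor RPC listens on port 8547 as configured above, and the membership response shape (an `id` field per server) is a simplified stand-in rather than the exact API type.

```python
import json

# Build the JSON-RPC request bodies for the verification calls used above.
def rpc_payload(method, params=None):
    return json.dumps({"jsonrpc": "2.0", "method": method, "params": params or [], "id": 1})

# Check that every expected sequencer appears in a (mocked) membership response.
def verify_cluster(membership, expected_ids):
    seen = {server["id"] for server in membership}
    return seen == set(expected_ids)

# Payloads you would POST to each conductor at http://<conductor>:8547 (assumed port).
for method in ("conductor_paused", "conductor_leaderWithID", "conductor_clusterMembership"):
    print(rpc_payload(method))

# A mock clusterMembership-style response with all three sequencers present.
mock = [{"id": "sequencer-0"}, {"id": "sequencer-1"}, {"id": "sequencer-2"}]
print(verify_cluster(mock, ["sequencer-0", "sequencer-1", "sequencer-2"]))  # True
```

In practice you would feed the real `conductor_clusterMembership` response into a check like this, or simply use `op-conductor-ops`, which wraps these calls for you.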
-
-#### Blue/green deployment
-
-To ensure there is no downtime when setting up conductor, you need a
-deployment script that can update sequencers without interrupting the network.
-
-An example of this workflow might look like:
-
-1. Query the current state of the network and determine which sequencer is
-   currently active (referred to as the "original" sequencer below).
-   From the other available sequencers, choose a candidate sequencer.
-2. Deploy the change to the candidate sequencer and then wait for it to sync
-   up to the original sequencer's unsafe head. You may want to check peer counts
-   and other important health metrics.
-3. Stop the original sequencer using `admin_stopSequencer`, which returns the
-   last inserted unsafe block hash. Wait for the candidate sequencer to sync to
-   this returned hash in case there is a delta.
-4. Start the candidate sequencer at the original's last inserted unsafe block
-   hash.
-   1. Here you can also execute additional checks for unsafe head progression
-      and decide to roll back the change (stop the candidate sequencer, start the
-      original, roll back the deployment of the candidate, etc.)
-5. Deploy the change to the original sequencer, wait for it to sync to the
-   chain head. Execute health checks.
-
-#### Post-conductor launch deployments
-
-After conductor is live, a similar canary-style workflow is used to ensure
-minimal downtime in case there is an issue with a deployment:
-
-1. Choose a candidate sequencer from the raft-cluster followers.
-2. Deploy to the candidate sequencer. Run health checks on the candidate.
-3. Transfer leadership to the candidate sequencer using
-   `conductor_transferLeaderToServer`. Run health checks on the candidate.
-4. Test whether the candidate is still the leader using `conductor_leader` after
-   some grace period (e.g., 30 seconds).
-   1. If not, then there is likely an issue with the deployment. Roll back.
-5. Upgrade the remaining sequencers, run health checks.
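The post-launch canary flow above can be sketched as follows. This is a hypothetical outline with the conductor RPC calls stubbed out; only the method names (`conductor_transferLeaderToServer`, `conductor_leader`) come from this page, everything else is illustrative.

```python
import time

class StubConductor:
    """Stand-in for a conductor RPC client; real calls would go over JSON-RPC."""
    def __init__(self, leader_after_transfer):
        self._leader = leader_after_transfer

    def transfer_leader_to_server(self, server_id):
        # Real client would call conductor_transferLeaderToServer here.
        print(f"conductor_transferLeaderToServer -> {server_id}")

    def leader(self):
        # Real client would call conductor_leader against the candidate.
        return self._leader

def canary_deploy(conductor, candidate, grace_period=0.0):
    """Transfer leadership, wait out the grace period, then confirm it stuck."""
    conductor.transfer_leader_to_server(candidate)
    time.sleep(grace_period)  # the doc suggests ~30 seconds; shortened here
    if not conductor.leader():
        return "rollback"  # candidate lost leadership: deployment is suspect
    return "proceed"       # safe to upgrade the remaining sequencers

print(canary_deploy(StubConductor(leader_after_transfer=True), "sequencer-1"))   # proceed
print(canary_deploy(StubConductor(leader_after_transfer=False), "sequencer-1"))  # rollback
```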
-
-### Configuration options
-
-`op-conductor` is configured via its [flags / environment variables](https://github.com/ethereum-optimism/optimism/blob/develop/op-conductor/flags/flags.go).
-
-#### --consensus.addr (`CONSENSUS_ADDR`)
-
-* **Usage:** Address to listen for consensus connections
-* **Default Value:** 127.0.0.1
-* **Required:** yes
-
-#### --consensus.port (`CONSENSUS_PORT`)
-
-* **Usage:** Port to listen for consensus connections
-* **Default Value:** 50050
-* **Required:** yes
-
-#### --raft.bootstrap (`RAFT_BOOTSTRAP`)
-
-
-  For bootstrapping a new cluster. Use this flag only on the sequencer that is
-  currently active, and start the node with this flag only once; afterwards,
-  the flag has to be removed (or the raft log deleted) before re-bootstrapping
-  the cluster.
-
-
-* **Usage:** If this node should bootstrap a new raft cluster
-* **Default Value:** false
-* **Required:** no
-
-#### --raft.server.id (`RAFT_SERVER_ID`)
-
-* **Usage:** Unique ID for this server used by raft consensus
-* **Default Value:** None specified
-* **Required:** yes
-
-#### --raft.storage.dir (`RAFT_STORAGE_DIR`)
-
-* **Usage:** Directory to store raft data
-* **Default Value:** None specified
-* **Required:** yes
-
-#### --node.rpc (`NODE_RPC`)
-
-* **Usage:** HTTP provider URL for op-node
-* **Default Value:** None specified
-* **Required:** yes
-
-#### --execution.rpc (`EXECUTION_RPC`)
-
-* **Usage:** HTTP provider URL for the execution layer
-* **Default Value:** None specified
-* **Required:** yes
-
-#### --healthcheck.interval (`HEALTHCHECK_INTERVAL`)
-
-* **Usage:** Interval between health checks
-* **Default Value:** None specified
-* **Required:** yes
-
-#### --healthcheck.unsafe-interval (`HEALTHCHECK_UNSAFE_INTERVAL`)
-
-* **Usage:** Interval allowed between unsafe head and now, measured in seconds
-* **Default Value:** None specified
-* **Required:** yes
-
-#### --healthcheck.safe-enabled (`HEALTHCHECK_SAFE_ENABLED`)
-
-* **Usage:** Whether to enable safe head
progression checks
-* **Default Value:** false
-* **Required:** no
-
-#### --healthcheck.safe-interval (`HEALTHCHECK_SAFE_INTERVAL`)
-
-* **Usage:** Interval between safe head progression, measured in seconds
-* **Default Value:** 1200
-* **Required:** no
-
-#### --healthcheck.min-peer-count (`HEALTHCHECK_MIN_PEER_COUNT`)
-
-* **Usage:** Minimum number of peers required to be considered healthy
-* **Default Value:** None specified
-* **Required:** yes
-
-#### --paused (`PAUSED`)
-
-
-  The paused state is not persisted, so if you unpause via RPC and then
-  restart, the conductor will start paused again.
-
-
-* **Usage:** Whether the conductor is paused
-* **Default Value:** false
-* **Required:** no
-
-#### --rpc.enable-proxy (`RPC_ENABLE_PROXY`)
-
-* **Usage:** Enable the RPC proxy to underlying sequencer services
-* **Default Value:** true
-* **Required:** no
-
-### RPCs
-
-Conductor exposes [admin RPCs](https://github.com/ethereum-optimism/optimism/blob/develop/op-conductor/rpc/api.go#L17)
-on the `conductor` namespace.
-
-#### conductor_overrideLeader
-
-`OverrideLeader` overrides the leader status; it only causes `Leader()` &
-`LeaderWithID()` calls to return true and does not impact the actual raft
-consensus leadership status. It is meant to be used when the cluster is
-unhealthy and this node is the only one up, so that the batcher can connect to
-the node and download blocks from the manually started sequencer.
-
-
-
-  ```sh
-  curl -X POST -H "Content-Type: application/json" --data \
-    '{"jsonrpc":"2.0","method":"conductor_overrideLeader","params":[],"id":1}' \
-    http://127.0.0.1:8547
-  ```
-
-
-
-  ```sh
-  cast rpc conductor_overrideLeader --rpc-url http://127.0.0.1:8547
-  ```
-
-
-
-#### conductor_pause
-
-`Pause` pauses op-conductor.
- - - - ```sh - curl -X POST -H "Content-Type: application/json" --data \ - '{"jsonrpc":"2.0","method":"conductor_pause","params":[],"id":1}' \ - http://127.0.0.1:8547 - ``` - - - - ```sh - cast rpc conductor_pause --rpc-url http://127.0.0.1:8547 - ``` - - - -#### conductor_resume - -`Resume` resumes op-conductor. - - - - ```sh - curl -X POST -H "Content-Type: application/json" --data \ - '{"jsonrpc":"2.0","method":"conductor_resume","params":[],"id":1}' \ - http://127.0.0.1:8547 - ``` - - - - ```sh - cast rpc conductor_resume --rpc-url http://127.0.0.1:8547 - ``` - - - -#### conductor_paused - -Paused returns true if the op-conductor is paused. - - - - ```sh - curl -X POST -H "Content-Type: application/json" --data \ - '{"jsonrpc":"2.0","method":"conductor_paused","params":[],"id":1}' \ - http://127.0.0.1:8547 - ``` - - - - ```sh - cast rpc conductor_paused --rpc-url http://127.0.0.1:8547 - ``` - - - -#### conductor_stopped - -Stopped returns true if the op-conductor is stopped. - - - - ```sh - curl -X POST -H "Content-Type: application/json" --data \ - '{"jsonrpc":"2.0","method":"conductor_stopped","params":[],"id":1}' \ - http://127.0.0.1:8547 - ``` - - - - ```sh - cast rpc conductor_stopped --rpc-url http://127.0.0.1:8547 - ``` - - - -#### conductor\_sequencerHealthy - -SequencerHealthy returns true if the sequencer is healthy. - - - - ```sh - curl -X POST -H "Content-Type: application/json" --data \ - '{"jsonrpc":"2.0","method":"conductor_sequencerHealthy","params":[],"id":1}' \ - http://127.0.0.1:8547 - ``` - - - - ```sh - cast rpc conductor_sequencerHealthy --rpc-url http://127.0.0.1:8547 - ``` - - - -#### conductor_leader - - - API related to consensus. - - -Leader returns true if the server is the leader. 
-
-
-
-  ```sh
-  curl -X POST -H "Content-Type: application/json" --data \
-    '{"jsonrpc":"2.0","method":"conductor_leader","params":[],"id":1}' \
-    http://127.0.0.1:8547
-  ```
-
-
-
-  ```sh
-  cast rpc conductor_leader --rpc-url http://127.0.0.1:8547
-  ```
-
-
-
-#### conductor_leaderWithID
-
-
-  API related to consensus.
-
-
-LeaderWithID returns the current leader's server info.
-
-
-
-  ```sh
-  curl -X POST -H "Content-Type: application/json" --data \
-    '{"jsonrpc":"2.0","method":"conductor_leaderWithID","params":[],"id":1}' \
-    http://127.0.0.1:8547
-  ```
-
-
-
-  ```sh
-  cast rpc conductor_leaderWithID --rpc-url http://127.0.0.1:8547
-  ```
-
-
-
-#### conductor_addServerAsVoter
-
-
-  API related to consensus.
-
-
-AddServerAsVoter adds a server as a voter to the cluster.
-
-
-
-  ```sh
-  curl -X POST -H "Content-Type: application/json" --data \
-    '{"jsonrpc":"2.0","method":"conductor_addServerAsVoter","params":[<server-id>, <server-addr>, <version>],"id":1}' \
-    http://127.0.0.1:8547
-  ```
-
-
-
-  ```sh
-  cast rpc conductor_addServerAsVoter --rpc-url http://127.0.0.1:8547
-  ```
-
-
-
-#### conductor_addServerAsNonvoter
-
-
-  API related to consensus.
-
-
-AddServerAsNonvoter adds a server as a non-voter to the cluster.
-The non-voter will not participate in the leader election.
-
-
-
-  ```sh
-  curl -X POST -H "Content-Type: application/json" --data \
-    '{"jsonrpc":"2.0","method":"conductor_addServerAsNonvoter","params":[<server-id>, <server-addr>, <version>],"id":1}' \
-    http://127.0.0.1:8547
-  ```
-
-
-
-  ```sh
-  cast rpc conductor_addServerAsNonvoter --rpc-url http://127.0.0.1:8547
-  ```
-
-
-
-#### conductor_removeServer
-
-
-  API related to consensus.
-
-
-RemoveServer removes a server from the cluster.
-
-
-
-  ```sh
-  curl -X POST -H "Content-Type: application/json" --data \
-    '{"jsonrpc":"2.0","method":"conductor_removeServer","params":[<server-id>, <version>],"id":1}' \
-    http://127.0.0.1:8547
-  ```
-
-
-
-  ```sh
-  cast rpc conductor_removeServer --rpc-url http://127.0.0.1:8547
-  ```
-
-
-
-#### conductor_transferLeader
-
-
-  API related to consensus.
-
-
-TransferLeader transfers leadership to another server (resigns).
-
-
-
-  ```sh
-  curl -X POST -H "Content-Type: application/json" --data \
-    '{"jsonrpc":"2.0","method":"conductor_transferLeader","params":[],"id":1}' \
-    http://127.0.0.1:8547
-  ```
-
-
-
-  ```sh
-  cast rpc conductor_transferLeader --rpc-url http://127.0.0.1:8547
-  ```
-
-
-
-#### conductor_transferLeaderToServer
-
-
-  API related to consensus.
-
-
-TransferLeaderToServer transfers leadership to a specific server.
-
-
-
-  ```sh
-  curl -X POST -H "Content-Type: application/json" --data \
-    '{"jsonrpc":"2.0","method":"conductor_transferLeaderToServer","params":[<server-id>, <server-addr>, <version>],"id":1}' \
-    http://127.0.0.1:8547
-  ```
-
-
-
-  ```sh
-  cast rpc conductor_transferLeaderToServer --rpc-url http://127.0.0.1:8547
-  ```
-
-
-
-#### conductor_clusterMembership
-
-ClusterMembership returns the current cluster membership configuration.
-
-
-
-  ```sh
-  curl -X POST -H "Content-Type: application/json" --data \
-    '{"jsonrpc":"2.0","method":"conductor_clusterMembership","params":[],"id":1}' \
-    http://127.0.0.1:8547
-  ```
-
-
-
-  ```sh
-  cast rpc conductor_clusterMembership --rpc-url http://127.0.0.1:8547
-  ```
-
-
-
-#### conductor_active
-
-
-  API called by `op-node`.
-
-
-Active returns true if the op-conductor is active (not paused or stopped).
-
-
-
-  ```sh
-  curl -X POST -H "Content-Type: application/json" --data \
-    '{"jsonrpc":"2.0","method":"conductor_active","params":[],"id":1}' \
-    http://127.0.0.1:8547
-  ```
-
-
-
-  ```sh
-  cast rpc conductor_active --rpc-url http://127.0.0.1:8547
-  ```
-
-
-
-#### conductor_commitUnsafePayload
-
-
-  API called by `op-node`.
-
-
-CommitUnsafePayload commits an unsafe payload (latest head) to the consensus
-layer. TODO - usage examples that include required params are needed
-
-
-
-  ```sh
-  curl -X POST -H "Content-Type: application/json" --data \
-    '{"jsonrpc":"2.0","method":"conductor_commitUnsafePayload","params":[],"id":1}' \
-    http://127.0.0.1:8547
-  ```
-
-
-
-  ```sh
-  cast rpc conductor_commitUnsafePayload --rpc-url http://127.0.0.1:8547
-  ```
-
-
-
-## Next steps
-
-* Check out [op-conductor-mon](https://github.com/ethereum-optimism/infra),
-  which monitors multiple op-conductor instances and provides a unified interface
-  for reporting metrics.
-* Get familiar with [op-conductor-ops](https://github.com/ethereum-optimism/infra/tree/main/op-conductor-ops) to interact with op-conductor.
diff --git a/pages/builders/chain-operators/tools/op-deployer.mdx b/pages/builders/chain-operators/tools/op-deployer.mdx
deleted file mode 100644
index 0a814a46e..000000000
--- a/pages/builders/chain-operators/tools/op-deployer.mdx
+++ /dev/null
@@ -1,148 +0,0 @@
----
-title: Deployer
-lang: en-US
-tags: ["op-deployer","eng-platforms"]
-description: Learn how op-deployer can simplify deploying a standard OP Stack Chain.
----
-
-import {Callout, Steps} from 'nextra/components'
-
-# Deployer
-
-`op-deployer` simplifies the process of deploying the OP Stack. It works similarly to [Terraform](https://www.terraform.io). Like Terraform, you define a declarative config file called an "intent," then run a command to apply the intent to your chain. `op-deployer` will compare the state of your chain against the intent, and make whatever changes are necessary for them to match. In its current state, it is intended to deploy new standard chains that utilize the Superchain-wide contracts.
-
-## Installation
-
-The recommended way to install `op-deployer` is to download the latest release from the monorepo's
-[release page](https://github.com/ethereum-optimism/optimism/releases).
To install a release, download the binary
-for your platform, then extract it somewhere on your `PATH`. The rest of this tutorial will assume that you have
-installed `op-deployer` using this method.
-
-## Deployment usage
-
-The base use case for `op-deployer` is deploying new OP Chains. This process is broken down into three steps:
-
-
-
-### `init`: configure your chain
-
-To get started with `op-deployer`, create an intent file that defines your desired chain configuration. Use the built-in `op-deployer` utility to generate this file:
-
-```
-./bin/op-deployer init --l1-chain-id 11155111 --l2-chain-ids <l2-chain-id> --workdir .deployer
-```
-
-This command will create a directory called `.deployer` in your current working directory containing the intent file and an empty `state.json` file. `state.json` is populated with the results of your deployment, and never needs to be edited directly.
-
-Your intent file will need to be modified to match your parameters, but it will initially look something like this:
-
-
-  Do not use the default addresses in the intent for a production chain! They are generated from the `test... junk`
-  mnemonic. **Any funds they hold will be stolen on a live chain.**
-
-
-
-```toml
-deploymentStrategy = "live" # Deploying a chain to a live network i.e. Sepolia
-l1ChainID = 11155111 # The chain ID of the L1 chain you'll be deploying to
-fundDevAccounts = true # Whether or not to fund dev accounts using the test... junk mnemonic on L2.
-l1ContractsLocator = "tag://op-contracts/v1.6.0" # L1 smart contracts versions -l2ContractsLocator = "tag://op-contracts/v1.7.0-beta.1+l2-contracts" # L2 smart contracts versions - -# Delete this table if you are using the shared Superchain contracts on the L1 -# If you are deploying your own SuperchainConfig and ProtocolVersions contracts, fill in these details -[superchainRoles] - proxyAdminOwner = "0xb9cdf788704088a4c0191d045c151fcbe2db14a4" - protocolVersionsOwner = "0x85d646ed26c3f46400ede51236d8d7528196849b" - guardian = "0x8c7e4a51acb17719d225bd17598b8a94b46c8767" - -# List of L2s to deploy. op-deployer can deploy multiple L2s at once -[[chains]] - # Your chain's ID, encoded as a 32-byte hex string - id = "0x0000000000000000000000000000000000000000000000000000000000003039" - # Update the fee recipient contract - baseFeeVaultRecipient = "0x0000000000000000000000000000000000000000" - l1FeeVaultRecipient = "0x0000000000000000000000000000000000000000" - sequencerFeeVaultRecipient = "0x0000000000000000000000000000000000000000" - eip1559Denominator = 50 - eip1559Elasticity = 6 - # Various ownership roles for your chain. When you use op-deployer init, these roles are generated using the - # test... junk mnemonic. You should replace these with your own addresses for production chains. - [chains.roles] - l1ProxyAdminOwner = "0x1a66b55a4f0139c32eddf4f8c60463afc3832e76" - l2ProxyAdminOwner = "0x7759a8a43aa6a7ee9434ddb597beed64180c40fd" - systemConfigOwner = "0x8e35d9523a0c4c9ac537d254079c2398c6f3b35f" - unsafeBlockSigner = "0xbb19dce4ce51f353a98dbab31b5fa3bc80dc7769" - batcher = "0x0e9c62712ab826e06b16b2236ce542f711eaffaf" - proposer = "0x86dfafe0689e20685f7872e0cb264868454627bc" - challenger = "0xf1658da627dd0738c555f9572f658617511c49d5" - -``` - -By default, `op-deployer` will fill in all other configuration variables with those that match the [standard configuration](https://specs.optimism.io/protocol/configurability.html). 
You can override these default settings by adding them to your intent file using the table below:
-
-```toml
-[globalDeployOverrides]
-  l2BlockTime = 1 # 1s L2 block time is also standard, op-deployer defaults to 2s
-```
-
-You can also do chain-by-chain configurations in the `chains` table.
-
-### `apply`: deploy your chain
-
-
-  Hardware wallets are not supported, but you can use ephemeral hot wallets since this deployer key has no privileges.
-
-
-Now that you've created your intent file, you can apply it to your chain to deploy the L1 smart contracts:
-
-```
-op-deployer apply --workdir .deployer --l1-rpc-url <l1-rpc-url> --private-key <private-key>
-```
-
-This command will deploy the OP Stack to L1. It will deploy all L2s specified in the intent file. Superchain
-configuration will be set to the Superchain-wide defaults - i.e., your chain will be opted into the [Superchain pause](https://specs.optimism.io/protocol/superchain-configuration.html#pausability)
-and will use the same [protocol versions](https://github.com/ethereum-optimism/specs/blob/main/specs/protocol/superchain-upgrades.md)
-address as other chains on the Superchain.
-
-### `inspect`: generate genesis files and chain information
-
-
-  To add your chain to the [Superchain Registry](https://github.com/ethereum-optimism/superchain-registry), you will need to provide the chain artifacts. To get these chain artifacts, you will need to write the output of these commands to new files.
-
-
-Inspect the `state.json` file by navigating to your working directory. With the contracts deployed, generate the genesis and rollup configuration files by running the following commands:
-
-```
-op-deployer inspect genesis --workdir .deployer > .deployer/genesis.json
-op-deployer inspect rollup --workdir .deployer > .deployer/rollup.json
-```
-
-Now that you have your `genesis.json` and `rollup.json`, you can spin up a node on your network.
You can also use the following inspect subcommands to get additional data:
-
-```
-op-deployer inspect l1 --workdir .deployer # outputs all L1 contract addresses for an L2 chain
-op-deployer inspect deploy-config --workdir .deployer # outputs the deploy config for an L2 chain
-op-deployer inspect l2-semvers --workdir .deployer # outputs the semvers for all L2 chains
-```
-
-
-## Bootstrap usage
-
-You can also use `op-deployer` to deploy the contracts needed to run the `init`... `apply` flow on new chains. This process, called 'bootstrapping,' is useful when you want to use `op-deployer` with L3s, new testnets, or other custom settlement chains.
-
-### OPCM bootstrap
-
-To deploy OPCM to a new chain, run the following command:
-
-```bash
-op-deployer bootstrap opcm \
-  --l1-rpc-url <l1-rpc-url> \
-  --private-key <private-key> \
-  --artifacts-locator tag://op-contracts/v1.6.0
-```
-
-## Next steps
-
-* For more details, check out the tool and documentation in the [op-deployer repository](https://github.com/ethereum-optimism/optimism/tree/develop/op-deployer/cmd/op-deployer).
-* For more information on OP Contracts Manager, refer to the [OPCM documentation](/stack/opcm).
diff --git a/pages/builders/chain-operators/tools/op-txproxy.mdx b/pages/builders/chain-operators/tools/op-txproxy.mdx
deleted file mode 100644
index 83f7c7a97..000000000
--- a/pages/builders/chain-operators/tools/op-txproxy.mdx
+++ /dev/null
@@ -1,86 +0,0 @@
----
-title: op-txproxy
-lang: en-US
-description: A passthrough proxy service that can apply additional constraints on transactions prior to reaching the sequencer.
----
-
-import { Callout, Steps } from 'nextra/components'
-
-# op-txproxy
-
-A [passthrough proxy](https://github.com/ethereum-optimism/infra/tree/main/op-txproxy) for the execution engine endpoint. This proxy does not forward all RPC traffic and only exposes a specific set of methods. Operationally, the ingress router should only re-route requests for these specific methods.
-
-  [proxyd](./proxyd) as an ingress router supports the mapping of specific methods to unique backends.
-
-## Methods
-
-### **eth_sendRawTransactionConditional**
-
-To safely expose this endpoint publicly, additional stateless constraints are applied. These constraints help scale validation rules horizontally and preemptively reject conditional transactions before they reach the sequencer.
-
-Various metrics are emitted to guide necessary adjustments.
-
-#### Runtime shutoff
-
-This service can be configured with a flag or environment variable to reject conditional transactions without needing to interrupt the execution engine. This feature is useful for diagnosing issues.
-
-`--sendRawTxConditional.enabled (default: true) ($OP_TXPROXY_SENDRAWTXCONDITIONAL_ENABLED)`
-
-When disabled, requests will fail with the `-32003` (transaction rejected) JSON-RPC error code and a message stating that the method is disabled.
-
-#### Rate limits
-
-Even though the op-geth implementation of this endpoint includes rate limits, rate limiting is instead applied here to terminate these requests early.
-
-`--sendRawTxConditional.ratelimit (default: 5000) ($OP_TXPROXY_SENDRAWTXCONDITIONAL_RATELIMIT)`
-
-#### Stateless validation
-
-* Conditional cost is below the max
-* Conditional values are valid (i.e., min \< max)
-* Transaction targets are restricted to 4337 Entrypoint contracts
-
-
-  The motivating factor for this endpoint is to enable permissionless 4337 mempools, hence the restricted usage of this method to just [Entrypoint](https://github.com/eth-infinitism/account-abstraction/blob/develop/contracts/core/EntryPoint.sol) transactions.
-
-  Please open up an issue if you'd like this restriction to be optional via configuration to broaden usage of this endpoint.
-
-
-When the request passes validation, it is passed through to the configured backend URL.
-
-`--sendRawTxConditional.backend ($OP_TXPROXY_SENDRAWTXCONDITIONAL_BACKENDS)`
-
-
-  Per the [specification](/stack/features/send-raw-transaction-conditional), conditional transactions are not gossiped between peers. Thus, if you use replicas in an active/passive sequencer setup, this request must be broadcast to all replicas.
-
-  [proxyd](./proxyd) as an egress router for this method supports this broadcasting functionality.
-
-
-## How it works
-
-To start using `op-txproxy`, follow these steps:
-
-
-  ### Build the binary or pull the Docker image
-
-  1. Run the following command to build the binary:
-     ```bash
-     make build
-     ```
-  2. This will build and output the binary under `/bin/op-txproxy`.
-
-  The image for this binary is also available as a [docker artifact](https://us-docker.pkg.dev/oplabs-tools-artifacts/images/op-txproxy).
-
-  ### Configure
-
-  The binary accepts configuration through CLI flags, which are also settable via environment variables. Either set the flags explicitly when starting the binary or set the environment variables of the host starting the proxy.
-
-  See [methods](#methods) for the configuration options available for each method.
-
-  ### Start
-
-  Start the service with the following command:
-
-  ```bash
-  op-txproxy # ... with flags if env variables are not set
-  ```
-
diff --git a/pages/builders/chain-operators/tools/proxyd.mdx b/pages/builders/chain-operators/tools/proxyd.mdx
deleted file mode 100644
index e2b7cf6ef..000000000
--- a/pages/builders/chain-operators/tools/proxyd.mdx
+++ /dev/null
@@ -1,76 +0,0 @@
----
-title: proxyd
-lang: en-US
-description: Learn about the proxyd service and how to configure it for use in the OP Stack.
----
-
-import { Steps } from 'nextra/components'
-
-# proxyd
-
-`proxyd` is an important RPC request router and proxy used within the OP Stack infrastructure.
It enables operators to efficiently route and manage RPC requests across multiple backend services, ensuring performance, fault tolerance, and security. - -## Key features -* RPC method whitelisting -* Backend request routing -* Automatic retries for failed backend requests -* Consensus tracking (latest, safe, and finalized blocks) -* Request/response rewriting to enforce consensus -* Load balancing across backend services -* Caching of immutable responses -* Metrics for request latency, error rates, and backend health - -## How it works - -To start using `proxyd`, follow these steps: - - - ### **Build the binary**: - - * Run the following command to build the `proxyd` binary: - ```bash - make proxyd - ``` - * This will build the `proxyd` binary. No additional dependencies are required. - - ### **Configure `proxyd`**: - - * Create a configuration file to define your proxy backends and routing rules. - * Refer to [example.config.toml](https://github.com/ethereum-optimism/infra/blob/main/proxyd/example.config.toml) for a full list of options with commentary. - - ### **Start the service**: - - Once the configuration file is ready, start the `proxyd` service using the following command: - - ```bash - proxyd - ``` - - -## Consensus awareness - -Version 4.0.0 and later include consensus awareness to minimize chain reorganizations. - -Set `consensus_aware` to `true` in the configuration to enable: - -* Polling backends for consensus data (latest block, safe block, peer count, etc.). -* Resolving consensus groups based on healthiest backends -* Enforcing consensus state across client requests - -## Caching and metrics - -### Cacheable methods - -Certain immutable methods, such as `eth_chainId` and `eth_getBlockByHash`, can be cached using Redis to optimize performance. - -### Metrics - -Extensive metrics are available to monitor request latency, error rates, backend health, and more. These can be configured via `metrics.port` and `metrics.host` in the configuration file. 
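To illustrate the consensus-awareness idea, here is a rough sketch of resolving a consensus group from backend health data. This is not proxyd's actual algorithm (see the repository for that); the backend fields and thresholds here are assumptions for illustration only.

```python
def resolve_consensus_group(backends, min_peers=2):
    """backends: name -> {"latest": block number, "peers": peer count, "healthy": bool}."""
    candidates = {n: b for n, b in backends.items()
                  if b["healthy"] and b["peers"] >= min_peers}
    if not candidates:
        return None, []
    # Consensus block: the highest block that every healthy candidate has reached.
    consensus_block = min(b["latest"] for b in candidates.values())
    group = sorted(n for n, b in candidates.items() if b["latest"] >= consensus_block)
    return consensus_block, group

backends = {
    "node-a": {"latest": 120, "peers": 5, "healthy": True},
    "node-b": {"latest": 119, "peers": 4, "healthy": True},
    "node-c": {"latest": 121, "peers": 1, "healthy": True},   # too few peers
    "node-d": {"latest": 90,  "peers": 6, "healthy": False},  # unhealthy
}
print(resolve_consensus_group(backends))  # (119, ['node-a', 'node-b'])
```

Serving requests only from such a group, and rewriting "latest"-tagged requests to the consensus block, is what lets proxyd shield clients from a single lagging or reorged backend.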
-
-## Next steps
-
-* Read about the [OP Stack chain architecture](/builders/chain-operators/architecture).
-* Find out how you can support [snap sync](/builders/chain-operators/management/snap-sync)
-  on your chain.
-* Find out how you can utilize [blob space](/builders/chain-operators/management/blobs)
-  to reduce the transaction fee cost on your chain.
diff --git a/pages/builders/chain-operators/tutorials.mdx b/pages/builders/chain-operators/tutorials.mdx
deleted file mode 100644
index 6b2387f7a..000000000
--- a/pages/builders/chain-operators/tutorials.mdx
+++ /dev/null
@@ -1,27 +0,0 @@
----
-title: Tutorials
-lang: en-US
-description: >-
-  Learn about tutorials in the Optimism ecosystem. This guide provides detailed
-  information and resources about tutorials.
----
-
-import { Card, Cards } from 'nextra/components'
-
-# Tutorials
-
-This section provides information on adding attributes to the derivation function, adding a precompile, creating your own L2 rollup testnet, integrating a new DA layer with Alt-DA, modifying predeployed contracts, and using Viem. You'll find overviews, tutorials, and guides to help you understand and work with these topics.
- - - - - - - - - - - - - - diff --git a/pages/builders/chain-operators/tutorials/_meta.json b/pages/builders/chain-operators/tutorials/_meta.json deleted file mode 100644 index c1122f8cc..000000000 --- a/pages/builders/chain-operators/tutorials/_meta.json +++ /dev/null @@ -1,7 +0,0 @@ -{ - "create-l2-rollup": "Creating your own rollup testnet", - "adding-derivation-attributes": "Adding attributes to the derivation function", - "adding-precompiles": "Adding a precompile", - "modifying-predeploys": "Modifying predeployed contracts", - "integrating-da-layer": "Integrating a new DA layer" -} \ No newline at end of file diff --git a/pages/builders/chain-operators/tutorials/adding-derivation-attributes.mdx b/pages/builders/chain-operators/tutorials/adding-derivation-attributes.mdx deleted file mode 100644 index 1bb3c9864..000000000 --- a/pages/builders/chain-operators/tutorials/adding-derivation-attributes.mdx +++ /dev/null @@ -1,276 +0,0 @@ ---- -title: Adding attributes to the derivation function -lang: en-US -description: Learn how to modify the derivation function for an OP Stack chain to track the amount of ETH being burned on L1. ---- - -import { Callout, Steps } from 'nextra/components' - -# Adding attributes to the derivation function - - - OP Stack Hacks are explicitly things that you can do with the OP Stack that are *not* currently intended for production use. - - OP Stack Hacks are not for the faint of heart. You will not be able to receive significant developer support for OP Stack Hacks — be prepared to get your hands dirty and to work without support. - - -## Overview - -In this tutorial, you'll modify the Bedrock Rollup. Although there are many ways to modify the OP Stack, you're going to spend this tutorial modifying the Derivation function. Specifically, you're going to update the Derivation function to track the amount of ETH being burned on L1! 
Who's gonna tell [ultrasound.money](http://ultrasound.money) that they should replace their backend with an OP Stack chain?
-
-## Getting the idea
-
-Let's quickly recap what you're about to do. The `op-node` is responsible for generating the Engine API payloads that trigger `op-geth` to produce blocks and transactions. The `op-node` already generates a "system transaction" for every L1 block that relays information about the current L1 state to the L2 chain. You're going to modify the `op-node` to add a new system transaction that reports the total burn amount (the base fee multiplied by the gas used) in each block.
-
-Although it might sound like a lot, the whole process only involves deploying a single smart contract, adding one new file to `op-node`, and modifying one existing file inside `op-node`. It'll be painless. Let's go!
-
-## Deploy the burn contract
-
-You're going to use a smart contract on your Rollup to store the reports that the `op-node` makes about the L1 burn. Here's the code for your smart contract:
-
-```solidity
-// SPDX-License-Identifier: MIT
-pragma solidity ^0.8.0;
-
-/**
- * @title L1Burn
- * @notice L1Burn keeps track of the total amount of ETH burned on L1.
- */
-contract L1Burn {
-    /**
-     * @notice Total amount of ETH burned on L1.
-     */
-    uint256 public total;
-
-    /**
-     * @notice Mapping of block numbers to total burn.
-     */
-    mapping (uint64 => uint256) public reports;
-
-    /**
-     * @notice Allows the system address to submit a report.
-     *
-     * @param _blocknum L1 block number the report corresponds to.
-     * @param _burn     Amount of ETH burned in the block.
-     */
-    function report(uint64 _blocknum, uint64 _burn) external {
-        require(
-            msg.sender == 0xDeaDDEaDDeAdDeAdDEAdDEaddeAddEAdDEAd0001,
-            "L1Burn: reports can only be made from system address"
-        );
-
-        total += _burn;
-        reports[_blocknum] = total;
-    }
-
-    /**
-     * @notice Tallies up the total burn since a given block number.
-     *
-     * @param _blocknum L1 block number to tally from.
- * - * @return Total amount of ETH burned since the given block number; - */ - function tally(uint64 _blocknum) external view returns (uint256) { - return total - reports[_blocknum]; - } -} -``` - -Deploy this smart contract to your L2 (using any tool you find convenient). Make a note of the address that the contract is deployed to because you'll need it in a minute. Simple! - -## Add the burn transaction - -Now you need to add logic to the `op-node` to automatically submit a burn report whenever an L1 block is produced. Since this transaction is very similar to the system transaction that reports other L1 block info (found in [l1\_block\_info.go](https://github.com/ethereum-optimism/optimism/blob/c9cd1215b76111888e25ee27a49a0bc0c4eeb0f8/op-node/rollup/derive/l1_block_info.go)), you'll use that transaction as a jumping-off point. - - - ### Navigate to the `op-node` package: - - ```bash - cd ~/optimism/op-node - ``` - - ### Inside of the folder `rollup/derive`, create a new file called `l1_burn_info.go`: - - ```bash - touch rollup/derive/l1_burn_info.go - ``` - - ### Paste the following into `l1_burn_info.go`, and make sure to replace `YOUR_BURN_CONTRACT_HERE` with the address of the `L1Burn` contract you just deployed. 
-
-```go
-package derive
-
-import (
-	"bytes"
-	"encoding/binary"
-	"fmt"
-	"math/big"
-
-	"github.com/ethereum/go-ethereum/common"
-	"github.com/ethereum/go-ethereum/core/types"
-	"github.com/ethereum/go-ethereum/crypto"
-
-	"github.com/ethereum-optimism/optimism/op-node/eth"
-)
-
-const (
-	L1BurnFuncSignature = "report(uint64,uint64)"
-	L1BurnArguments     = 2
-	L1BurnLen           = 4 + 32*L1BurnArguments
-)
-
-var (
-	L1BurnFuncBytes4 = crypto.Keccak256([]byte(L1BurnFuncSignature))[:4]
-	L1BurnAddress    = common.HexToAddress("YOUR_BURN_CONTRACT_HERE")
-)
-
-type L1BurnInfo struct {
-	Number uint64
-	Burn   uint64
-}
-
-func (info *L1BurnInfo) MarshalBinary() ([]byte, error) {
-	data := make([]byte, L1BurnLen)
-	offset := 0
-	copy(data[offset:4], L1BurnFuncBytes4)
-	offset += 4
-	binary.BigEndian.PutUint64(data[offset+24:offset+32], info.Number)
-	offset += 32
-	binary.BigEndian.PutUint64(data[offset+24:offset+32], info.Burn)
-	return data, nil
-}
-
-func (info *L1BurnInfo) UnmarshalBinary(data []byte) error {
-	if len(data) != L1BurnLen {
-		return fmt.Errorf("data is unexpected length: %d", len(data))
-	}
-	var padding [24]byte
-	offset := 4
-	info.Number = binary.BigEndian.Uint64(data[offset+24 : offset+32])
-	if !bytes.Equal(data[offset:offset+24], padding[:]) {
-		return fmt.Errorf("l1 burn tx number exceeds uint64 bounds: %x", data[offset:offset+32])
-	}
-	offset += 32
-	info.Burn = binary.BigEndian.Uint64(data[offset+24 : offset+32])
-	if !bytes.Equal(data[offset:offset+24], padding[:]) {
-		return fmt.Errorf("l1 burn tx burn exceeds uint64 bounds: %x", data[offset:offset+32])
-	}
-	return nil
-}
-
-func L1BurnDepositTxData(data []byte) (L1BurnInfo, error) {
-	var info L1BurnInfo
-	err := info.UnmarshalBinary(data)
-	return info, err
-}
-
-func L1BurnDeposit(seqNumber uint64, block eth.BlockInfo, sysCfg eth.SystemConfig) (*types.DepositTx, error) {
-	infoDat := L1BurnInfo{
-		Number: block.NumberU64(),
-		Burn:   block.BaseFee().Uint64() * block.GasUsed(),
-	}
- data, err := infoDat.MarshalBinary() - if err != nil { - return nil, err - } - source := L1InfoDepositSource{ - L1BlockHash: block.Hash(), - SeqNumber: seqNumber, - } - return &types.DepositTx{ - SourceHash: source.SourceHash(), - From: L1InfoDepositerAddress, - To: &L1BurnAddress, - Mint: nil, - Value: big.NewInt(0), - Gas: 150_000_000, - IsSystemTransaction: true, - Data: data, - }, nil - } - - func L1BurnDepositBytes(seqNumber uint64, l1Info eth.BlockInfo, sysCfg eth.SystemConfig) ([]byte, error) { - dep, err := L1BurnDeposit(seqNumber, l1Info, sysCfg) - if err != nil { - return nil, fmt.Errorf("failed to create L1 burn tx: %w", err) - } - l1Tx := types.NewTx(dep) - opaqueL1Tx, err := l1Tx.MarshalBinary() - if err != nil { - return nil, fmt.Errorf("failed to encode L1 burn tx: %w", err) - } - return opaqueL1Tx, nil - } - ``` - - Feel free to take a look at this file if you're interested. It's relatively simple, mainly just defining a new transaction type and describing how the transaction should be encoded. - - -## Insert the burn transactions - -Finally, you'll need to update `~/optimism/op-node/rollup/derive/attributes.go` to insert the new burn transaction into every block. 
You'll need to make the following changes:
-
-
-  ### Find these lines:
-
-  ```go
-  l1InfoTx, err := L1InfoDepositBytes(seqNumber, l1Info, sysConfig)
-  if err != nil {
-    return nil, NewCriticalError(fmt.Errorf("failed to create l1InfoTx: %w", err))
-  }
-  ```
-
-  ### After those lines, add this code fragment:
-
-  ```go
-  l1BurnTx, err := L1BurnDepositBytes(seqNumber, l1Info, sysConfig)
-  if err != nil {
-    return nil, NewCriticalError(fmt.Errorf("failed to create l1BurnTx: %w", err))
-  }
-  ```
-
-  ### Immediately following, change these lines:
-
-  ```go
-  txs := make([]hexutil.Bytes, 0, 1+len(depositTxs))
-  txs = append(txs, l1InfoTx)
-  ```
-
-  to
-
-  ```go
-  txs := make([]hexutil.Bytes, 0, 2+len(depositTxs))
-  txs = append(txs, l1InfoTx)
-  txs = append(txs, l1BurnTx)
-  ```
-
-  All you're doing here is creating a new burn transaction after every `l1InfoTx` and inserting it into every block.
-
-
-## Rebuild your op-node
-
-Before you can see this change take effect, you'll need to rebuild your `op-node`:
-
-```bash
-cd ~/optimism/op-node
-make op-node
-```
-
-Now start your `op-node` if it isn't running or restart your `op-node` if it's already running. You should see the change immediately — new blocks will contain two system transactions instead of just one!
-
-## Checking the result
-
-Query the `total` function of your contract and you should start to see the total slowly increasing. Play around with the `tally` function to grab the amount of gas burned since a given L2 block. You could use this to implement a version of [ultrasound.money](http://ultrasound.money) that keeps track of things with an OP Stack chain as a backend.
-
-One way to get the total is to run these commands (replacing `YOUR_BURN_CONTRACT_HERE` with your `L1Burn` contract address):
-
-```bash
-export ETH_RPC_URL=http://localhost:8545
-cast call YOUR_BURN_CONTRACT_HERE "total()(uint256)" | cast --from-wei
-```
-
-## Conclusion
-
-With just a few tiny changes to the `op-node`, you were able to implement a change to the OP Stack that allows you to keep track of the L1 ETH burn on L2.
With a live Cannon Fault Proof System, you should not only be able to track the L1 burn on L2, you should be able to *prove* the burn to contracts back on L1. That's crazy!
-
-The OP Stack is an extremely powerful platform that allows you to perform a large amount of computation trustlessly. It's a superpower for smart contracts. Tracking the L1 burn is just one of the many, many wild things you can do with the OP Stack. If you're looking for inspiration or you want to see what others are building on the OP Stack, check out the OP Stack Hacks page. Maybe you'll find a project you want to work on, or maybe you'll get the inspiration you need to build the next killer smart contract.
diff --git a/pages/builders/chain-operators/tutorials/adding-precompiles.mdx b/pages/builders/chain-operators/tutorials/adding-precompiles.mdx
deleted file mode 100644
index 98b352c73..000000000
--- a/pages/builders/chain-operators/tutorials/adding-precompiles.mdx
+++ /dev/null
@@ -1,166 +0,0 @@
----
-title: Adding a precompile
-lang: en-US
-description: Learn how to run an EVM with a new precompile on an OP Stack chain to speed up calculations that are not currently supported.
----
-
-import { Callout, Steps } from 'nextra/components'
-
-# Adding a precompile
-
-
-  OP Stack Hacks are explicitly things that you can do with the OP Stack that are *not* currently intended for production use.
-
-  OP Stack Hacks are not for the faint of heart. You will not be able to receive significant developer support for OP Stack Hacks — be prepared to get your hands dirty and to work without support.
-
-
-One possible use of the OP Stack is to run an EVM with a new precompile that speeds up calculations the EVM does not currently support. In this tutorial, you'll make a simple precompile that returns a constant value if it's called with four or fewer bytes, or an error if it is called with more than that.
- -To create a new precompile, the file to modify is [`op-geth/core/vm/contracts.go`](https://github.com/ethereum-optimism/op-geth/blob/optimism-history/core/vm/contracts.go). - - - ### Add to `PrecompiledContractsBerlin` on line 82 (or a later fork, if the list of precompiles changes again) - - * add a structure named after your new precompile, and - * use an address that is unlikely to ever clash with a standard precompile (0x100, for example): - - ```go - common.BytesToAddress([]byte{1,0}): &retConstant{}, - ``` - - ### Add the lines for the precompile - - ```go - type retConstant struct{} - - func (c *retConstant) RequiredGas(input []byte) uint64 { - return uint64(1024) - } - - var ( - errConstInvalidInputLength = errors.New("invalid input length") - ) - - func (c *retConstant) Run(input []byte) ([]byte, error) { - // Only allow input up to four bytes (function signature) - if len(input) > 4 { - return nil, errConstInvalidInputLength - } - - output := make([]byte, 6) - for i := 0; i < 6; i++ { - output[i] = byte(64+i) - } - return output, nil - } - ``` - - ### Stop `op-geth` and recompile: - - ```bash - cd ~/op-geth - make geth - ``` - - ### Restart `op-geth` - - ### Run these commands to see the result of calling the precompile successfully, and the result of an error: - - ```bash - cast call 0x0000000000000000000000000000000000000100 "whatever()" - cast call 0x0000000000000000000000000000000000000100 "whatever(string)" "fail" - ``` - - -## How does it work? - -This is the precompile interface definition: - -```go -type PrecompiledContract interface { - RequiredGas(input []byte) uint64 // RequiredPrice calculates the contract gas use - Run(input []byte) ([]byte, error) // Run runs the precompiled contract -} -``` - -It means that for every precompile you need two functions: - -* `RequiredGas` which returns the gas cost for the call. This function takes an array of bytes as input, and returns a single value, the gas cost. 
-* `Run` which runs the actual precompile. This function also takes an array of bytes, but it returns two values: the call's output (a byte array) and an error.
-
-For every fork that changes the precompiles you have a [`map`](https://www.w3schools.com/go/go_maps.php) from addresses to the `PrecompiledContract` definitions:
-
-```go
-// PrecompiledContractsBerlin contains the default set of pre-compiled Ethereum
-// contracts used in the Berlin release.
-var PrecompiledContractsBerlin = map[common.Address]PrecompiledContract{
-	common.BytesToAddress([]byte{1}): &ecrecover{},
-	.
-	.
-	.
-	common.BytesToAddress([]byte{9}): &blake2F{},
-	common.BytesToAddress([]byte{1,0}): &retConstant{},
-}
-```
-
-The key of the map is an address. You create those from bytes using `common.BytesToAddress([]byte{})`. In this case you have two bytes, `0x01` and `0x00`. Together you get the address `0x0…0100`.
-
-The value, such as `&retConstant{}`, is a pointer to a new instance of the structure, and that pointer is what satisfies the `PrecompiledContract` interface.
-
-The next step is to define the precompiled contract itself.
-
-```go
-type retConstant struct{}
-```
-
-First you create a structure for the precompile.
-
-```go
-func (c *retConstant) RequiredGas(input []byte) uint64 {
-	return uint64(1024)
-}
-```
-
-Then you define a function as part of that structure. Here you just require a constant amount of gas, but of course the calculation can be a lot more sophisticated.
-
-```go
-var (
-	errConstInvalidInputLength = errors.New("invalid input length")
-)
-```
-
-Next you define a variable for the error.
-
-```go
-func (c *retConstant) Run(input []byte) ([]byte, error) {
-```
-
-This is the function that actually executes the precompile.
-
-```go
-
-    // Only allow input up to four bytes (function signature)
-    if len(input) > 4 {
-        return nil, errConstInvalidInputLength
-    }
-```
-
-Return an error if warranted.
The reason this precompile allows up to four bytes of input is that any standard call (for example, using `cast`) is going to have at least four bytes for the function signature.
-
-In Go, `return a, b` returns two values from a function. Here the normal output is `nil` because the function is returning an error.
-
-```go
-    output := make([]byte, 6)
-    for i := 0; i < 6; i++ {
-        output[i] = byte(64+i)
-    }
-    return output, nil
-}
-```
-
-Finally, you create the output buffer, fill it, and then return it.
-
-## Conclusion
-
-An OP Stack chain with additional precompiles can be useful, for example, to further reduce the computational effort required for cryptographic operations by moving them from interpreted EVM code to compiled Go code.
diff --git a/pages/builders/chain-operators/tutorials/create-l2-rollup.mdx b/pages/builders/chain-operators/tutorials/create-l2-rollup.mdx
deleted file mode 100644
index 5194e4c3d..000000000
--- a/pages/builders/chain-operators/tutorials/create-l2-rollup.mdx
+++ /dev/null
@@ -1,758 +0,0 @@
----
-title: Creating your own L2 rollup testnet
-lang: en-US
-description: This tutorial walks you through spinning up an OP Stack testnet chain.
----
-
-import { Callout, Steps } from 'nextra/components'
-import { WipCallout } from '@/components/WipCallout'
-
-
-# Creating your own L2 rollup testnet
-
-
-
-
-Please **be prepared to set aside approximately one hour** to get everything running properly and **make sure to read through the guide carefully**.
-You don't want to miss any important steps that might cause issues down the line.
-
-
-This tutorial is **designed for developers** who want to learn about the OP Stack by spinning up an OP Stack testnet chain.
-You'll walk through the full deployment process, learn about all of the components that make up the OP Stack, and **you'll end up with your very own OP Stack testnet**.
-
-It's useful to understand what each of these components does before
-you start deploying your chain.
To learn about the different components please -read the [deployment overview page](/builders/chain-operators/deploy/overview). - -You can use this testnet to experiment and perform tests, or you can choose to modify the chain to adapt it to your own needs. -**The OP Stack is free and open source software licensed entirely under the MIT license**. -You don't need permission from anyone to modify or deploy the stack in any configuration you want. - - -Modifications to the OP Stack may prevent a chain from being able to benefit from aspects of the [Optimism Superchain](/superchain/superchain-explainer). -Make sure to check out the [Superchain Explainer](/superchain/superchain-explainer) to learn more. - - -## Software dependencies - -| Dependency | Version | Version Check Command | -| ------------------------------------------------------------- | -------- | --------------------- | -| [git](https://git-scm.com/) | `^2` | `git --version` | -| [go](https://go.dev/) | `^1.21` | `go version` | -| [node](https://nodejs.org/en/) | `^20` | `node --version` | -| [pnpm](https://pnpm.io/installation) | `^8` | `pnpm --version` | -| [foundry](https://github.com/foundry-rs/foundry#installation) | `^0.2.0` | `forge --version` | -| [make](https://linux.die.net/man/1/make) | `^3` | `make --version` | -| [jq](https://github.com/jqlang/jq) | `^1.6` | `jq --version` | -| [direnv](https://direnv.net) | `^2` | `direnv --version` | - -### Notes on specific dependencies - -#### `node` - -We recommend using the latest LTS version of Node.js (currently v20). -[`nvm`](https://github.com/nvm-sh/nvm) is a useful tool that can help you manage multiple versions of Node.js on your machine. -You may experience unexpected errors on older versions of Node.js. - -#### `foundry` - -It's recommended to use the scripts in the monorepo's `package.json` for managing `foundry` to ensure you're always working with the correct version. 
This approach simplifies the installation, update, and version checking process. Make sure to clone the monorepo locally before proceeding.
-
-#### `direnv`
-
-Parts of this tutorial use [`direnv`](https://direnv.net) as a way of loading environment variables from `.envrc` files into your shell.
-This means you won't have to manually export environment variables every time you want to use them.
-`direnv` only ever has access to files that you explicitly allow it to see.
-
-After [installing `direnv`](https://direnv.net/docs/installation.html), you will need to **make sure that [`direnv` is hooked into your shell](https://direnv.net/docs/hook.html)**.
-Make sure you've followed [the guide on the `direnv` website](https://direnv.net/docs/hook.html), then **close your terminal and reopen it** so that the changes take effect (or `source` your config file if you know how to do that).
-
-
-Make sure that you have correctly hooked `direnv` into your shell by modifying your shell configuration file (like `~/.bashrc` or `~/.zshrc`).
-If you haven't edited a config file then you probably haven't configured `direnv` properly (and things might not work later).
-
-
-## Get access to a Sepolia node
-
-You'll be deploying an OP Stack Rollup chain that uses a Layer 1 blockchain to host and order transaction data.
-The OP Stack Rollups were designed to use EVM Equivalent blockchains like Ethereum, OP Mainnet, or standard Ethereum testnets as their L1 chains.
-
-**This guide uses the Sepolia testnet as an L1 chain**.
-We recommend that you also use Sepolia.
-You can also use other EVM-compatible blockchains, but you may run into unexpected errors.
-If you want to use an alternative network, make sure to carefully review each command and replace any Sepolia-specific values with the values for your network.
-
-Since you're deploying your OP Stack chain to Sepolia, you'll need to have access to a Sepolia node.
-You can either use a node provider like [Alchemy](https://www.alchemy.com/) (easier) or run your own Sepolia node (harder).
-
-## Build the source code
-
-You're going to be spinning up your OP Stack chain directly from source code instead of using a container system like [Docker](https://www.docker.com/).
-Although this adds a few extra steps, it means you'll have an easier time modifying the behavior of the stack if you'd like to do so.
-If you want a summary of the various components you'll be using, take another look at the [deployment overview page](/builders/chain-operators/deploy/overview).
-
-
-You're using the home directory `~/` as the work directory for this tutorial for simplicity.
-You can use any directory you'd like, but using the home directory will allow you to copy/paste the commands in this guide.
-If you choose to use a different directory, make sure you're using the correct directory in the commands throughout this tutorial.
-
-
-### Build the Optimism monorepo
-
-
-
-{

Clone the Optimism Monorepo

} - -```bash -cd ~ -git clone https://github.com/ethereum-optimism/optimism.git -``` - -{

Enter the Optimism Monorepo

} - -```bash -cd optimism -``` - -{

Check out the correct branch

} - - -You will be using the `tutorials/chain` branch of the Optimism Monorepo to deploy an OP Stack testnet chain during this tutorial. -This is a non-production branch that lags behind the `develop` branch. -You should **NEVER** use the `develop` or `tutorials/chain` branches in production. - - -```bash -git checkout tutorials/chain -``` - -{

Check your dependencies

} - - -Don't skip this step! Make sure you have all of the required dependencies installed before continuing. - - -Run the following script and double check that you have all of the required versions installed. -If you don't have the correct versions installed, you may run into unexpected errors. - -```bash -./packages/contracts-bedrock/scripts/getting-started/versions.sh -``` - -{

Install dependencies

} - -```bash -pnpm install -``` - -{

Build the various packages inside of the Optimism Monorepo

} - -```bash -make op-node op-batcher op-proposer -pnpm build -``` - -
- -### Build `op-geth` - - - -{

Clone op-geth

} - -```bash -cd ~ -git clone https://github.com/ethereum-optimism/op-geth.git -``` - -{

Enter op-geth

} - -```bash -cd op-geth -``` - -{

Build op-geth

} - -```bash -make geth -``` - -
- -## Fill out environment variables - -You'll need to fill out a few environment variables before you can start deploying your chain. - - - -{

Enter the Optimism Monorepo

} - -```bash -cd ~/optimism -``` - -{

Duplicate the sample environment variable file

} - -```bash -cp .envrc.example .envrc -``` - -{

Fill out the environment variable file

} - -Open up the environment variable file and fill out the following variables: - -| Variable Name | Description | -| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| `L1_RPC_URL` | URL for your L1 node (a Sepolia node in this case). | -| `L1_RPC_KIND` | Kind of L1 RPC you're connecting to, used to inform optimal transactions receipts fetching. Valid options: `alchemy`, `quicknode`, `infura`, `parity`, `nethermind`, `debug_geth`, `erigon`, `basic`, `any`. | - -
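For reference, a filled-out pair of entries might look like the following. The URL is a hypothetical Alchemy endpoint; substitute your own provider and API key.

```shell
# Hypothetical values -- replace the URL with your own Sepolia endpoint.
export L1_RPC_URL=https://eth-sepolia.g.alchemy.com/v2/YOUR_API_KEY
export L1_RPC_KIND=alchemy
```

Setting `L1_RPC_KIND` to match your provider lets the node use the optimal receipt-fetching strategy for that backend.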
- -## Generate addresses - -You'll need four addresses and their private keys when setting up the chain: - -* The `Admin` address has the ability to upgrade contracts. -* The `Batcher` address publishes Sequencer transaction data to L1. -* The `Proposer` address publishes L2 transaction results (state roots) to L1. -* The `Sequencer` address signs blocks on the p2p network. - - - -{

Enter the Optimism Monorepo

} - -```bash -cd ~/optimism -``` - -{

Generate new addresses

} - - -You should **not** use the `wallets.sh` tool for production deployments. -If you are deploying an OP Stack based chain into production, you should likely be using a combination of hardware security modules and hardware wallets. - - -```bash -./packages/contracts-bedrock/scripts/getting-started/wallets.sh -``` - -{

Check the output

} - -Make sure that you see output that looks something like the following: - -```text -Copy the following into your .envrc file: - -# Admin address -export GS_ADMIN_ADDRESS=0x9625B9aF7C42b4Ab7f2C437dbc4ee749d52E19FC -export GS_ADMIN_PRIVATE_KEY=0xbb93a75f64c57c6f464fd259ea37c2d4694110df57b2e293db8226a502b30a34 - -# Batcher address -export GS_BATCHER_ADDRESS=0xa1AEF4C07AB21E39c37F05466b872094edcf9cB1 -export GS_BATCHER_PRIVATE_KEY=0xe4d9cd91a3e53853b7ea0dad275efdb5173666720b1100866fb2d89757ca9c5a - -# Proposer address -export GS_PROPOSER_ADDRESS=0x40E805e252D0Ee3D587b68736544dEfB419F351b -export GS_PROPOSER_PRIVATE_KEY=0x2d1f265683ebe37d960c67df03a378f79a7859038c6d634a61e40776d561f8a2 - -# Sequencer address -export GS_SEQUENCER_ADDRESS=0xC06566E8Ec6cF81B4B26376880dB620d83d50Dfb -export GS_SEQUENCER_PRIVATE_KEY=0x2a0290473f3838dbd083a5e17783e3cc33c905539c0121f9c76614dda8a38dca -``` - -{

Save the addresses

} - -Copy the output from the previous step and paste it into your `.envrc` file as directed. - -{

Fund the addresses

} - -**You will need to send ETH to the `Admin`, `Proposer`, and `Batcher` addresses.** -The exact amount of ETH required depends on the L1 network being used. -**You do not need to send any ETH to the `Sequencer` address as it does not send transactions.** - -It's recommended to fund the addresses with the following amounts when using Sepolia: - -* `Admin` — 0.5 Sepolia ETH -* `Proposer` — 0.2 Sepolia ETH -* `Batcher` — 0.1 Sepolia ETH - -**To get the required Sepolia ETH to fund the addresses, we recommend using the [Superchain Faucet](https://console.optimism.io/faucet?utm_source=docs)** together with [Coinbase verification](https://help.coinbase.com/en/coinbase/getting-started/getting-started-with-coinbase/id-doc-verification). - -
- -## Load environment variables - -Now that you've filled out the environment variable file, you need to load those variables into your terminal. - - - -{

Enter the Optimism Monorepo

} - -```bash -cd ~/optimism -``` - -{

Load the variables with direnv

} - - -You're about to use `direnv` to load environment variables from the `.envrc` file into your terminal. -Make sure that you've [installed `direnv`](https://direnv.net/docs/installation.html) and that you've properly [hooked `direnv` into your shell](#configuring-direnv). - - -Next you'll need to allow `direnv` to read this file and load the variables into your terminal using the following command. - -```bash -direnv allow -``` - - -WARNING: `direnv` will unload itself whenever your `.envrc` file changes. -**You *must* rerun the following command every time you change the `.envrc` file.** - - -{

Confirm that the variables were loaded

} - -After running `direnv allow` you should see output that looks something like the following (the exact output will vary depending on the variables you've set, don't worry if it doesn't look exactly like this): - -```bash -direnv: loading ~/optimism/.envrc -direnv: export +DEPLOYMENT_CONTEXT +ETHERSCAN_API_KEY +GS_ADMIN_ADDRESS +GS_ADMIN_PRIVATE_KEY +GS_BATCHER_ADDRESS +GS_BATCHER_PRIVATE_KEY +GS_PROPOSER_ADDRESS +GS_PROPOSER_PRIVATE_KEY +GS_SEQUENCER_ADDRESS +GS_SEQUENCER_PRIVATE_KEY +IMPL_SALT +L1_RPC_KIND +L1_RPC_URL +PRIVATE_KEY +TENDERLY_PROJECT +TENDERLY_USERNAME -``` - -**If you don't see this output, you likely haven't [properly configured `direnv`](#configuring-direnv).** -Make sure you've configured `direnv` properly and run `direnv allow` again so that you see the desired output. - -
- -## Configure your network - -Once you've built both repositories, you'll need to head back to the Optimism Monorepo to set up the configuration file for your chain. -Currently, chain configuration lives inside of the [`contracts-bedrock`](https://github.com/ethereum-optimism/optimism/tree/v1.1.4/packages/contracts-bedrock) package in the form of a JSON file. - - - -{

Enter the Optimism Monorepo

} - -```bash -cd ~/optimism -``` - -{

Move into the contracts-bedrock package

} - -```bash -cd packages/contracts-bedrock -``` - -{

Generate the configuration file

} - -Run the following script to generate the `getting-started.json` configuration file inside of the `deploy-config` directory. - -```bash -./scripts/getting-started/config.sh -``` - -{

Review the configuration file (Optional)

} - -If you'd like, you can review the configuration file that was just generated by opening up `deploy-config/getting-started.json` in your favorite text editor. -It's recommended to keep this file as-is for now so you don't run into any unexpected errors. - -
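If you just want a quick look at the headline values without opening an editor, `jq` works well. The field names below are assumptions about what a typical generated config contains, so adjust them to match your file:

```shell
# Peek at a few notable fields of the generated config (field names assumed).
test -f deploy-config/getting-started.json \
  && jq '{l1ChainID, l2ChainID, l2BlockTime}' deploy-config/getting-started.json \
  || echo "run config.sh first"
```

This is read-only, so it carries none of the risk of editing the file itself.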
-
-## Deploy the Create2 factory (optional)
-
-If you're deploying an OP Stack chain to a network other than Sepolia, you may need to deploy a Create2 factory contract to the L1 chain.
-This factory contract is used to deploy OP Stack smart contracts in a deterministic fashion.
-
-
-This step is typically only necessary if you are deploying your OP Stack chain to a custom L1 chain.
-If you are deploying your OP Stack chain to Sepolia, you can safely skip this step.
-
-
-
-{

Check if the factory exists

} - -The Create2 factory contract will be deployed at the address `0x4e59b44847b379578588920cA78FbF26c0B4956C`. -You can check if this contract has already been deployed to your L1 network with a block explorer or by running the following command: - -```bash -cast codesize 0x4e59b44847b379578588920cA78FbF26c0B4956C --rpc-url $L1_RPC_URL -``` - -If the command returns `0` then the contract has not been deployed yet. -If the command returns `69` then the contract has been deployed and you can safely skip this section. - -{

Fund the factory deployer

} - -You will need to send some ETH to the address that will be used to deploy the factory contract, `0x3fAB184622Dc19b6109349B94811493BF2a45362`. -This address can only be used to deploy the factory contract and will not be used for anything else. -Send at least 1 ETH to this address on your L1 chain. - -{

Deploy the factory

} - -Using `cast`, deploy the factory contract to your L1 chain: - -```bash -cast publish --rpc-url $L1_RPC_URL 0xf8a58085174876e800830186a08080b853604580600e600039806000f350fe7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffe03601600081602082378035828234f58015156039578182fd5b8082525050506014600cf31ba02222222222222222222222222222222222222222222222222222222222222222a02222222222222222222222222222222222222222222222222222222222222222 -``` - -{

Wait for the transaction to be mined

} - -Make sure that the transaction is included in a block on your L1 chain before continuing. - -{

Verify that the factory was deployed

} - -Run the code size check again to make sure that the factory was properly deployed: - -```bash -cast codesize 0x4e59b44847b379578588920cA78FbF26c0B4956C --rpc-url $L1_RPC_URL -``` - -
- -## Deploy the L1 contracts - -Once you've configured your network, it's time to deploy the L1 contracts necessary for the functionality of the chain. - - - -{

Deploy the L1 contracts

} - -```bash -forge script scripts/Deploy.s.sol:Deploy --private-key $GS_ADMIN_PRIVATE_KEY --broadcast --rpc-url $L1_RPC_URL --slow -``` - - -If you see a nondescript error that includes `EvmError: Revert` and `Script failed` then you likely need to change the `IMPL_SALT` environment variable. -This variable determines the addresses of various smart contracts that are deployed via [CREATE2](https://eips.ethereum.org/EIPS/eip-1014). -If the same `IMPL_SALT` is used to deploy the same contracts twice, the second deployment will fail. -**You can generate a new `IMPL_SALT` by running `direnv allow` anywhere in the Optimism Monorepo.** - - -
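If you'd rather not rerun `direnv allow`, note that `IMPL_SALT` is nothing more than 32 random bytes, hex-encoded. Assuming your `.envrc` treats it as a plain hex string (as the sample file does), you can set a fresh one by hand:

```shell
# Any fresh 32-byte hex string works as a new CREATE2 salt.
export IMPL_SALT=$(openssl rand -hex 32)
echo "$IMPL_SALT"
```

Either way, a new salt gives every CREATE2-deployed contract a new address, which is what lets the second deployment succeed.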
-
-## Generate the L2 config files
-
-Now that you've set up the L1 smart contracts, you can automatically generate several configuration files that are used within the Consensus Client and the Execution Client.
-
-You need to generate three important files:
-
-1. `genesis.json` includes the genesis state of the chain for the Execution Client.
-2. `rollup.json` includes configuration information for the Consensus Client.
-3. `jwt.txt` is a [JSON Web Token](https://jwt.io/introduction) that allows the Consensus Client and the Execution Client to communicate securely (the same mechanism is used in Ethereum clients).
-
-
-
-{

Navigate to the op-node package

} - -```bash -cd ~/optimism/op-node -``` - -{

Create genesis files

} - -Now you'll generate the `genesis.json` and `rollup.json` files within the `op-node` folder: - -```bash -go run cmd/main.go genesis l2 \ - --deploy-config ../packages/contracts-bedrock/deploy-config/getting-started.json \ - --l1-deployments ../packages/contracts-bedrock/deployments/getting-started/.deploy \ - --outfile.l2 genesis.json \ - --outfile.rollup rollup.json \ - --l1-rpc $L1_RPC_URL -``` - -{

Create an authentication key

} - -Next you'll create a [JSON Web Token](https://jwt.io/introduction) that will be used to authenticate the Consensus Client and the Execution Client. -This token is used to ensure that only the Consensus Client and the Execution Client can communicate with each other. -You can generate a JWT with the following command: - -```bash -openssl rand -hex 32 > jwt.txt -``` - -{

Copy genesis files into the op-geth directory

} - -Finally, you'll need to copy the `genesis.json` file and `jwt.txt` file into `op-geth` so you can use it to initialize and run `op-geth`: - -```bash -cp genesis.json ~/op-geth -cp jwt.txt ~/op-geth -``` - -
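Before moving on, it can save debugging time to confirm the secret has the expected shape: a valid JWT secret here is exactly 64 hexadecimal characters on a single line. The snippet below generates a stand-in file so it's self-contained; in practice, point the check at the `jwt.txt` you just copied into `~/op-geth`:

```bash
# Stand-in secret (same command as above, example path).
openssl rand -hex 32 > /tmp/jwt-example.txt

if grep -Eq '^[0-9a-f]{64}$' /tmp/jwt-example.txt; then
  echo "jwt: ok"
else
  echo "jwt: malformed"
fi
```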
- -## Initialize `op-geth` - -You're almost ready to run your chain! -Now you just need to run a few commands to initialize `op-geth`. -You're going to be running a Sequencer node, so you'll need to import the `Sequencer` private key that you generated earlier. -This private key is what your Sequencer will use to sign new blocks. - - - -{

Navigate to the op-geth directory

} - -```bash -cd ~/op-geth -``` - -{

Create a data directory folder

} - -```bash -mkdir datadir -``` - -{

Build the op-geth binary

} - -```bash -make geth -``` - -{

Initialize op-geth

} - -```bash -build/bin/geth init --state.scheme=hash --datadir=datadir genesis.json -``` - -
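If initialization succeeded, `geth` will have created its database under `datadir/geth/chaindata`. A quick existence check, using a stand-in path so the snippet is self-contained (substitute your real `~/op-geth/datadir`):

```bash
# Stand-in for the directory layout `geth init` produces.
DATADIR=/tmp/datadir-example
mkdir -p "$DATADIR/geth/chaindata"

if [ -d "$DATADIR/geth/chaindata" ]; then
  echo "chain database present"
else
  echo "init may have failed"
fi
```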
- -## Start `op-geth` - -Now you'll start `op-geth`, your Execution Client. -Note that you won't start seeing any transactions until you start the Consensus Client in the next step. - - - -{

Open up a new terminal

} - -You'll need a terminal window to run `op-geth` in. - -{

Navigate to the op-geth directory

} - -```bash -cd ~/op-geth -``` - -{

Run op-geth

} - - -You're using `--gcmode=archive` to run `op-geth` here because this node will act as your Sequencer. -It's useful to run the Sequencer in archive mode because the `op-proposer` requires access to the full state. -Feel free to run other (non-Sequencer) nodes in full mode if you'd like to save disk space. Just make sure at least one other archive node exists and the `op-proposer` points to it. - - - -It's important that you've already initialized the geth node at this point as per the previous section. Failure to do this will cause startup issues between `op-geth` and `op-node`. - - -```bash -./build/bin/geth \ - --datadir ./datadir \ - --http \ - --http.corsdomain="*" \ - --http.vhosts="*" \ - --http.addr=0.0.0.0 \ - --http.api=web3,debug,eth,txpool,net,engine \ - --ws \ - --ws.addr=0.0.0.0 \ - --ws.port=8546 \ - --ws.origins="*" \ - --ws.api=debug,eth,txpool,net,engine \ - --syncmode=full \ - --gcmode=archive \ - --nodiscover \ - --maxpeers=0 \ - --networkid=42069 \ - --authrpc.vhosts="*" \ - --authrpc.addr=0.0.0.0 \ - --authrpc.port=8551 \ - --authrpc.jwtsecret=./jwt.txt \ - --rollup.disabletxpoolgossip=true -``` - -
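If `op-geth` fails to start, a common cause is another process already bound to one of the ports it needs (8545 for HTTP, 8546 for WebSocket, 8551 for the auth RPC). A rough pre-flight sketch; it assumes `lsof` may or may not be installed and just skips the check if it isn't:

```bash
# Report whether each port op-geth binds appears to be taken.
for port in 8545 8546 8551; do
  if command -v lsof >/dev/null 2>&1 && lsof -i ":$port" >/dev/null 2>&1; then
    echo "port $port: in use"
  else
    echo "port $port: looks free"
  fi
done
```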
- -## Start `op-node` - -Once you've got `op-geth` running you'll need to run `op-node`. -Like Ethereum, the OP Stack has a Consensus Client (`op-node`) and an Execution Client (`op-geth`). -The Consensus Client "drives" the Execution Client over the Engine API. - - - -{

Open up a new terminal

} - -You'll need a terminal window to run the `op-node` in. - -{

Navigate to the op-node directory

} - -```bash -cd ~/optimism/op-node -``` - -{

Run op-node

} - -```bash -./bin/op-node \ - --l2=http://localhost:8551 \ - --l2.jwt-secret=./jwt.txt \ - --sequencer.enabled \ - --sequencer.l1-confs=5 \ - --verifier.l1-confs=4 \ - --rollup.config=./rollup.json \ - --rpc.addr=0.0.0.0 \ - --p2p.disable \ - --rpc.enable-admin \ - --p2p.sequencer.key=$GS_SEQUENCER_PRIVATE_KEY \ - --l1=$L1_RPC_URL \ - --l1.rpckind=$L1_RPC_KIND -``` - -Once you run this command, you should see the `op-node` begin to sync L2 blocks from the L1 chain. -Once the `op-node` has caught up to the tip of the L1 chain, it'll begin to send blocks to `op-geth` for execution. -At that point, you'll start to see blocks being created inside of `op-geth`. - - -**By default, your `op-node` will try to use the peer-to-peer network to speed up the synchronization process.** -If you're using a chain ID that is also being used by others, like the default chain ID for this tutorial (42069), your `op-node` will receive blocks signed by other sequencers. -These requests will fail and waste time and network resources. -**To avoid this, this tutorial starts with peer-to-peer synchronization disabled (`--p2p.disable`).** - -Once you have multiple nodes, you may want to enable peer-to-peer synchronization. -You can add the following options to the `op-node` command to enable peer-to-peer synchronization with specific nodes: - -``` - --p2p.static= \ - --p2p.listen.ip=0.0.0.0 \ - --p2p.listen.tcp=9003 \ - --p2p.listen.udp=9003 \ -``` - -Alternatively, you can remove the [--p2p.static](/builders/node-operators/configuration/consensus-config#p2pstatic) option, but you may see failed requests from other chains using the same chain ID. - - -
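The value passed to `--p2p.static` is a comma-separated list of libp2p multiaddrs for the peers you control. A sketch of assembling the list; the IP addresses and peer IDs below are placeholders, not real nodes:

```bash
# Placeholder multiaddrs for two nodes you operate; a real entry ends in
# the node's actual libp2p peer ID.
PEER_A="/ip4/10.0.0.2/tcp/9003/p2p/peer-id-a"
PEER_B="/ip4/10.0.0.3/tcp/9003/p2p/peer-id-b"

# Comma-separated value for --p2p.static=
STATIC_PEERS="$PEER_A,$PEER_B"
echo "$STATIC_PEERS"
```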
- -## Start `op-batcher` - -The `op-batcher` takes transactions from the Sequencer and publishes those transactions to L1. -Once these Sequencer transactions are included in a finalized L1 block, they're officially part of the canonical chain. -The `op-batcher` is critical! - -It's best to give the `Batcher` address at least 1 Sepolia ETH to ensure that it can continue operating without running out of ETH for gas. -Keep an eye on the balance of the `Batcher` address because it can expend ETH quickly if there are a lot of transactions to publish. - - - -{

Open up a new terminal

} - -You'll need a terminal window to run the `op-batcher` in. - -{

Navigate to the op-batcher directory

} - -```bash -cd ~/optimism/op-batcher -``` - -{

Run op-batcher

} - -```bash -./bin/op-batcher \ - --l2-eth-rpc=http://localhost:8545 \ - --rollup-rpc=http://localhost:9545 \ - --poll-interval=1s \ - --sub-safety-margin=6 \ - --num-confirmations=1 \ - --safe-abort-nonce-too-low-count=3 \ - --resubmission-timeout=30s \ - --rpc.addr=0.0.0.0 \ - --rpc.port=8548 \ - --rpc.enable-admin \ - --max-channel-duration=25 \ - --l1-eth-rpc=$L1_RPC_URL \ - --private-key=$GS_BATCHER_PRIVATE_KEY -``` - - -The [`--max-channel-duration=n`](/builders/chain-operators/configuration/batcher#set-your--op_batcher_max_channel_duration) setting tells the batcher to write all the data to L1 every `n` L1 blocks. -When it is low, transactions are written to L1 frequently and other nodes can synchronize from L1 quickly. -When it is high, transactions are written to L1 less frequently and the batcher spends less ETH. -If you want to reduce costs, either set this value to 0 to disable it or increase it to a higher value. - - -
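To build intuition for the cost trade-off, the worst-case submission interval is simply the channel duration multiplied by the L1 block time (12 seconds on Sepolia and Ethereum mainnet):

```bash
# With --max-channel-duration=25, the batcher writes to L1 at most every:
MAX_CHANNEL_DURATION=25   # in L1 blocks
L1_BLOCK_TIME=12          # seconds per L1 block

echo "$(( MAX_CHANNEL_DURATION * L1_BLOCK_TIME )) seconds"   # 300 seconds = 5 minutes
```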
- -## Start `op-proposer` - -Now start `op-proposer`, which proposes new state roots. - - - -{

Open up a new terminal

} - -You'll need a terminal window to run the `op-proposer` in. - -{

Navigate to the op-proposer directory

} - -```bash -cd ~/optimism/op-proposer -``` - -{

Run op-proposer

} - -```bash -./bin/op-proposer \ - --poll-interval=12s \ - --rpc.port=8560 \ - --rollup-rpc=http://localhost:9545 \ - --l2oo-address=$(cat ../packages/contracts-bedrock/deployments/getting-started/.deploy | jq -r .L2OutputOracleProxy) \ - --private-key=$GS_PROPOSER_PRIVATE_KEY \ - --l1-eth-rpc=$L1_RPC_URL -``` - -
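The `--l2oo-address` flag above is read out of the `.deploy` artifact with `jq`. To see that lookup in isolation, here is the same pattern against a minimal stand-in file (your real `.deploy` has many more keys, and the address below is fake):

```bash
# Minimal stand-in for packages/contracts-bedrock/deployments/getting-started/.deploy
cat > /tmp/deploy-example.json <<'EOF'
{"L2OutputOracleProxy": "0x1111111111111111111111111111111111111111"}
EOF

# Same jq expression the op-proposer command uses.
jq -r .L2OutputOracleProxy /tmp/deploy-example.json
```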
- -## Connect your wallet to your chain - -You now have a fully functioning OP Stack Rollup with a Sequencer node running on `http://localhost:8545`. -You can connect your wallet to this chain the same way you'd connect your wallet to any other EVM chain. -If you need an easy way to connect to your chain, just [click here](https://chainid.link?network=opstack-getting-started). - -## Get ETH on your chain - -Once you've connected your wallet, you'll probably notice that you don't have any ETH to pay for gas on your chain. -The easiest way to deposit Sepolia ETH into your chain is to send ETH directly to the `L1StandardBridge` contract. - - - -{

Navigate to the contracts-bedrock directory

} - -```bash -cd ~/optimism/packages/contracts-bedrock -``` - -{

Get the address of the L1StandardBridgeProxy contract

} - -```bash -cat deployments/getting-started/.deploy | jq -r .L1StandardBridgeProxy -``` - -{

Send some Sepolia ETH to the L1StandardBridgeProxy contract

} - -Grab the L1 bridge proxy contract address and, using the wallet that you want to have ETH on your Rollup, send that address a small amount of ETH on Sepolia (0.1 or less is fine). -This will trigger a deposit that will mint ETH into your wallet on L2. -It may take up to 5 minutes for that ETH to appear in your wallet on L2. - -
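If you send the deposit from a command-line tool rather than a wallet, remember that raw transaction values are denominated in wei (1 ETH = 10^18 wei), so 0.1 ETH is 10^17 wei:

```bash
# 0.1 ETH expressed in wei, e.g. for a tool that takes the deposit
# amount as a raw value.
echo $(( 10 ** 17 ))
```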
- -## See your rollup in action - -You can interact with your Rollup the same way you'd interact with any other EVM chain. -Send some transactions, deploy some contracts, and see what happens! - -## Next steps - -* You can [modify the blockchain in various ways](../hacks/overview). -* Check out the [protocol specs](https://specs.optimism.io/) for more detail about the rollup protocol. -* If you run into any problems, please visit the [Chain Operators Troubleshooting Guide](../management/troubleshooting) for help. diff --git a/pages/builders/chain-operators/tutorials/integrating-da-layer.mdx b/pages/builders/chain-operators/tutorials/integrating-da-layer.mdx deleted file mode 100644 index 8627d4385..000000000 --- a/pages/builders/chain-operators/tutorials/integrating-da-layer.mdx +++ /dev/null @@ -1,49 +0,0 @@ ---- -title: Integrating a new DA layer with Alt-DA -lang: en-US -description: Learn how to add support for a new DA Layer within the OP Stack. ---- - -import { Callout, Steps } from 'nextra/components' - -# Integrating a new DA layer with Alt-DA - - - The Alt-DA Mode feature is currently in Beta within the MIT-licensed OP Stack. Beta features are built and reviewed by Optimism Collective core contributors, and provide developers with early access to highly requested configurations. - These features may experience stability issues, and we encourage feedback from our early users. - - -[Alt-DA Mode](/stack/beta-features/alt-da-mode) enables seamless integration of any DA Layer, regardless of their commitment type, into the OP Stack. After a DA Server is built for a DA Layer, any chain operator can launch an OP Stack chain using that DA Layer for sustainably low costs. - -## Build your DA server - -Our suggestion is for every DA Layer to build and maintain their own DA Server, with support from the OP Labs team along the way. The DA Server will need to be run by every node operator, so we highly recommend making your DA Server open source and MIT licensed. 
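The `get`/`put` interface described in the steps below is, at its core, a content-addressed store. As a toy illustration only (a real DA server is an HTTP service backed by your DA layer, not a local directory, and real commitments follow your chosen binary encoding), the round trip looks like this:

```bash
# Toy model of a DA server's put/get flow, using a temp directory as the
# "layer" and a sha256 hash as the commitment. Purely illustrative.
DA_DIR=$(mktemp -d)

da_put() {                        # data on stdin -> prints the commitment
  data=$(cat)
  commitment=$(printf '%s' "$data" | sha256sum | cut -d' ' -f1)
  printf '%s' "$data" > "$DA_DIR/$commitment"
  echo "$commitment"
}

da_get() {                        # $1 = commitment -> prints the data
  if [ -f "$DA_DIR/$1" ]; then
    cat "$DA_DIR/$1"
  else
    echo "not found (a real server would return HTTP 404)" >&2
    return 1
  fi
}

c=$(printf 'example batch data' | da_put)
da_get "$c"
```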
- - - ### Design your commitment binary encoding - - * It must point to the data on your layer (like block height / hash). - * It must be able to validate the data returned from the DA layer (i.e., include a cryptographic commitment to the data, such as a hash, merkle proof, or polynomial commitment; this could be done against the block hash with a complex proof). - - - See the [specs](https://specs.optimism.io/experimental/alt-da.html?highlight=input-commitment-submission#input-commitment-submission) for more info on commitment submission. - - - ### Claim your da\_layer byte - - * Claim your [byte](https://github.com/ethereum-optimism/specs/discussions/135) - - ### Implement the DA server - - * Write a simple HTTP server which supports `get` and `put` - * `put` is used by the batcher and can return the commitment to the batcher in the body. It should not return until the data is known to be submitted to your DA layer. - * `get` should fetch the data. If the data is not available, it should return a `404` not found. If there are other errors, a different error should be returned. - - - -## Run Alt-DA -Follow our guide on [how to operate an Alt-DA Mode chain](/builders/chain-operators/features/alt-da-mode), except instead of using the S3 DA server, use the DA server that you built. - -## Next steps - -* For more detail on implementing the DA Server, [see the specification](https://specs.optimism.io/experimental/alt-da.html#da-server). diff --git a/pages/builders/chain-operators/tutorials/modifying-predeploys.mdx deleted file mode 100644 index 4391673c0..000000000 --- a/pages/builders/chain-operators/tutorials/modifying-predeploys.mdx +++ /dev/null @@ -1,86 +0,0 @@ ---- -title: Modifying predeployed contracts -lang: en-US -description: Learn how to modify predeployed contracts for an OP Stack chain by upgrading the proxy. 
---- - -import { Callout, Steps } from 'nextra/components' - -# Modifying predeployed contracts - - - OP Stack Hacks are explicitly things that you can do with the OP Stack that are *not* currently intended for production use. - - OP Stack Hacks are not for the faint of heart. You will not be able to receive significant developer support for OP Stack Hacks — be prepared to get your hands dirty and to work without support. - - -OP Stack blockchains have a number of [predeployed contracts](https://github.com/ethereum-optimism/optimism/blob/129032f15b76b0d2a940443a39433de931a97a44/packages/contracts-bedrock/src/constants.ts) that provide important functionality. -Most of those contracts are proxies that can be upgraded using the `proxyAdminOwner` which was configured when the network was initially deployed. - -## Before you begin - -In this tutorial, you learn how to modify predeployed contracts for an OP Stack chain by upgrading the proxy. The predeploys are controlled from a predeploy called [`ProxyAdmin`](https://github.com/ethereum-optimism/optimism/blob/129032f15b76b0d2a940443a39433de931a97a44/packages/contracts-bedrock/contracts/universal/ProxyAdmin.sol), whose address is `0x4200000000000000000000000000000000000018`. -The function to call is [`upgrade(address,address)`](https://github.com/ethereum-optimism/optimism/blob/129032f15b76b0d2a940443a39433de931a97a44/packages/contracts-bedrock/contracts/universal/ProxyAdmin.sol#L205-L229). -The first parameter is the proxy to upgrade, and the second is the address of a new implementation. - -## Modify the legacy `L1BlockNumber` contract - -For example, the legacy `L1BlockNumber` contract is at `0x420...013`. -To disable this function, we'll set the implementation to `0x00...00`. -We do this using the [Foundry](https://book.getfoundry.sh/) command `cast`. - - - ### We'll need several constants. - - * Set these addresses as variables in your terminal. 
- - ```sh - L1BLOCKNUM=0x4200000000000000000000000000000000000013 - PROXY_ADMIN=0x4200000000000000000000000000000000000018 - ZERO_ADDR=0x0000000000000000000000000000000000000000 - ``` - - * Set `PRIVKEY` to the private key of your ADMIN address. - - * Set `ETH_RPC_URL`. If you're on the computer that runs the blockchain, use this command. - - ```sh - export ETH_RPC_URL=http://localhost:8545 - ``` - - ### Verify `L1BlockNumber` works correctly. - - See that when you call the contract you get a block number, and twelve seconds later you get the next one (block time on L1 is twelve seconds). - - ```sh - cast call $L1BLOCKNUM 'number()' | cast --to-dec - sleep 12 && cast call $L1BLOCKNUM 'number()' | cast --to-dec - ``` - - ### Get the current implementation for the contract. - - ```sh - L1BLOCKNUM_IMPLEMENTATION=`cast call $L1BLOCKNUM "implementation()" | sed 's/000000000000000000000000//'` - echo $L1BLOCKNUM_IMPLEMENTATION - ``` - - ### Change the implementation to the zero address - - ```sh - cast send --private-key $PRIVKEY $PROXY_ADMIN "upgrade(address,address)" $L1BLOCKNUM $ZERO_ADDR - ``` - - ### See that the implementation is address zero, and that calling it fails. - - ```sh - cast call $L1BLOCKNUM 'implementation()' - cast call $L1BLOCKNUM 'number()' - ``` - - ### Fix the predeploy by returning it to the previous implementation, and verify it works. - - ```sh - cast send --private-key $PRIVKEY $PROXY_ADMIN "upgrade(address,address)" $L1BLOCKNUM $L1BLOCKNUM_IMPLEMENTATION - cast call $L1BLOCKNUM 'number()' | cast --to-dec - ``` - diff --git a/pages/chain/testing/dev-node.mdx b/pages/chain/testing/dev-node.mdx index 7d22530e7..930662d20 100644 --- a/pages/chain/testing/dev-node.mdx +++ b/pages/chain/testing/dev-node.mdx @@ -182,7 +182,5 @@ Send some transactions, deploy some contracts, and see what happens! ## Next Steps -* You can [modify the blockchain in various ways](../../builders/chain-operators/hacks/overview). 
* Check out the [protocol specs](https://specs.optimism.io/) for more detail about the rollup protocol. -* If you run into any problems, please visit the [Chain Operators Troubleshooting Guide](../../builders/chain-operators/management/troubleshooting) - or [file an issue](https://github.com/ethereum-optimism/optimism/issues) for help. +* If you run into any problems, please visit the [Chain Operators Troubleshooting Guide](https://docs.optimism.io/builders/chain-operators/management/troubleshooting). diff --git a/pages/index.mdx index 90cbd7066..944990184 100644 --- a/pages/index.mdx +++ b/pages/index.mdx @@ -10,7 +10,7 @@ import { Cards, Card } from 'nextra/components' Welcome to Metal L2 Docs, the unified home of [Metal L2's](/connect/resources/glossary#metal-l2) technical documentation and information about the [OP Stack](/stack/getting-started) that Metal L2 is built on. -Information about Metal DAO's governance, community, and mission can be found on the [Metal L2 Website](https://metall2.com) +Information about Metal DAO's governance, community, and mission can be found on the [Metal L2 Website](https://metall2.com). Whether you're a developer building an app on Metal L2 or a node operator running a Metal L2 node, you'll find everything you need to get started right here. 
diff --git a/pages/stack/getting-started.mdx b/pages/stack/getting-started.mdx index 8be73a2aa..c8ae6317f 100644 --- a/pages/stack/getting-started.mdx +++ b/pages/stack/getting-started.mdx @@ -8,7 +8,7 @@ import { Callout } from 'nextra/components' # Getting started with the OP Stack -**The OP Stack is the standardized, shared, and open-source development stack that powers Optimism, maintained by the Optimism Collective.** +**The OP Stack is the standardized, shared, and open-source development stack that powers Metal L2, maintained by the Optimism Collective.** Stay up to date on the Superchain and the OP Stack by subscribing to the [Optimism Developer Blog](https://blog.oplabs.co/) @@ -18,16 +18,16 @@ The OP Stack consists of the many different software components managed and main The OP Stack is built as a public good for the Ethereum and Optimism ecosystems. To understand how to operate an OP Stack chain, including roll-up and chain deployment basics, visit [Chain Operator guide](/builders/chain-operators/self-hosted). Check out these guides to get an overview of everything you need to know to properly support OP mainnet within your [exchange](/builders/app-developers/overview) and [wallet](/builders/app-developers/overview). -## The OP Stack powers Optimism +## The OP Stack powers Metal L2 -The OP Stack is the set of software that powers Optimism — currently in the form of the software behind OP Mainnet and eventually in the form of the Optimism Superchain and its governance. +The OP Stack is the set of software that powers Metal L2 — currently in the form of the software behind OP Mainnet and eventually in the form of the Optimism Superchain and its governance. With the advent of the Superchain concept, it has become increasingly important for Optimism to easily support the secure creation of new chains that can interoperate within the proposed Superchain ecosystem. 
As a result, the OP Stack is primarily focused around the creation of a shared, high-quality, and fully open-source system for creating new L2 blockchains. By coordinating on shared standards, the Optimism Collective can avoid rebuilding the same software in silos repeatedly. Although the OP Stack today significantly simplifies the process of creating L2 blockchains, it's important to note that this does not fundamentally define what the OP Stack **is**. -The OP Stack is *all* of the software that powers Optimism. +The OP Stack is *all* of the software that powers Metal L2. As Optimism evolves, so will the OP Stack. **The OP Stack can be thought of as software components that either help define a specific layer of the Optimism ecosystem or fill a role as a module within an existing layer.** diff --git a/pages/stack/security/faq.mdx b/pages/stack/security/faq.mdx index 267de6258..fd73ef329 100644 --- a/pages/stack/security/faq.mdx +++ b/pages/stack/security/faq.mdx @@ -14,7 +14,7 @@ import { Callout } from 'nextra/components' ## Security in the decentralized context -The OP Stack is a decentralized development stack that powers Optimism. Components of the OP Stack may be maintained by various different teams within the Optimism Collective. It is generally easier to talk about the security model of specific chains built on the OP Stack rather than the security model of the stack itself. **The OP Stack security baseline is to create safe defaults while still giving developers the flexibility to make modifications and extend the stack.** +The OP Stack is a decentralized development stack that powers Metal L2. Components of the OP Stack may be maintained by various different teams within the Optimism Collective. It is generally easier to talk about the security model of specific chains built on the OP Stack rather than the security model of the stack itself. 
**The OP Stack security baseline is to create safe defaults while still giving developers the flexibility to make modifications and extend the stack.** ## FAQ