
# Upgrade test path #172

Open · wants to merge 7 commits into `main`
2 changes: 1 addition & 1 deletion protocol/l1-upgrades.md
@@ -224,7 +224,7 @@ This method will:
In order to avoid blocking this work, if the `FaultDisputeGame` and `PermissionedDisputeGame`
have not become MCP-L1 compatible in time, the OPCM upgrade path will deploy chain specific instances.

-## Release process
+## Upgrade process

As part of the release process, the associated implementation contracts and a new OPCM will be deployed.

121 changes: 121 additions & 0 deletions solidity/20241126-upgrade-path-testing.md
@@ -0,0 +1,121 @@
# Purpose

In order to safely implement the [L1 Upgrades design](../protocol/l1-upgrades.md),
the upgrade path must be tested, meaning that we should be able to:

1. Start with a system matching the previous release.
2. Upgrade that system (using `OPCM.upgrade()`).
3. Run tests against the upgraded system.

Put another way, we need two different methods for setting up the foundry tests that are
created by inheriting from `CommonTest`.

# Summary

We propose extending `op-deployer bootstrap` so it can deploy the superchain contracts of a
previous release, and replacing the `Deploy` script in the foundry test setup with a new
`Upgrade` script. The `Upgrade` script sets up a system matching the previous release via
`op-deployer`, deploys the implementation contracts on `develop`, and calls `OPCM.upgrade()`,
so that the existing unit tests can be run against the upgraded system. This setup route is
gated behind a new `useUpgradedSystem` flag in `CommonTest`.

# Problem Statement + Context

Our test suite is currently not able to replicate the system from a previous release, which
prevents us from testing the upgrade path to the system currently under development.

As we're moving towards onchain upgrades via the OPCM, we want to be able to test that:

1. The upgrade works to move a system from the previous release to the one on `develop`.
2. The upgraded system passes the same set of unit tests as a freshly deployed system on `develop`.

# Proposed Solution

## Deploying Superchain contracts with `op-deployer`

We propose to extend `op-deployer bootstrap` to enable deploying superchain contracts.

```shell
op-deployer bootstrap superchain <artifacts-locator> --outdir <outdir>
```

This command writes the deployment artifacts to the specified output directory.
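For illustration, the written output might look something like the following; the file name and
keys here are assumptions, mirroring the shape of the `op-deployer bootstrap opcm` output shown
later in this document:

```json
{
  "SuperchainConfigProxy": "0x...",
  "ProtocolVersionsProxy": "0x..."
}
```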

## Setting up the system to test

The current foundry tests use the following high-level call flow.

```mermaid
graph LR
A["CommonTest.setUp()"] --> B["Setup.L1()"] --> C["Deploy.run()"]
C --> D["DeployImplementations.run()"]
C --> E["DeploySuperchain.run()"]
C --> F["OPCM.deploy()"]
```

This is roughly what each component does (an example test built on this flow follows the list):

- **`CommonTest.setUp()`:**
  - Defines system config settings (e.g. `useInterop`).
  - Provides reusable addresses (`alice` and `bob`).
  - Provides reusable generic contracts (`ERC20`).
- **`Setup.L1()`:**
  - Reads addresses from the deployments file stored on disk.
  - Labels addresses to make traces more readable, e.g.:

    ```solidity
    optimismPortal = IOptimismPortal(deploy.mustGetAddress("OptimismPortalProxy"));
    vm.label(address(optimismPortal), "OptimismPortal");
    ```

- **`Deploy.run()`:**
  - Deploys all necessary contracts via calls to `DeploySuperchain` and `DeployImplementations`.
- **`DeploySuperchain.run()`:**
  - Deploys the superchain contracts (`SuperchainConfig` and `ProtocolVersions`).
- **`DeployImplementations.run()`:**
  - Deploys the contracts (implementations and singletons) necessary for upgrading to the system on
    `develop`.
- **`OPCM.deploy()`:**
  - Deploys all proxies and bespoke singleton contracts as necessary for a new OP Chain.
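For context, a typical test built on this flow looks roughly like the following. The contract and
test names are illustrative and the import path is an assumption; `alice` and `optimismPortal` are
provided by `CommonTest`/`Setup.L1()` as described above:

```solidity
// Hypothetical example of a test on top of CommonTest: setUp() wires the whole
// system, and the test uses addresses and contracts provided by Setup.L1().
import { CommonTest } from "test/setup/CommonTest.sol"; // assumed path

contract OptimismPortal_Deposit_Test is CommonTest {
    function test_depositTransaction_fromAlice_succeeds() external {
        vm.deal(alice, 1 ether);
        vm.prank(alice);
        // optimismPortal was read from the deployments file and labeled in Setup.L1().
        optimismPortal.depositTransaction{ value: 1 ether }(alice, 1 ether, 100_000, false, hex"");
    }
}
```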

This work would replace the `Deploy` script with a new `Upgrade` script, resulting in the following
call flow:

```mermaid
graph LR
A["CommonTest.setUp()"] --> B["Setup.L1()"] --> C["Upgrade.run()"]
C --> D["DeployImplementations.run()"]
C --> E["OPCM.upgrade()"]
```

The new components would behave as follows (a code sketch of `Upgrade.run()` follows the list):

- **`Upgrade.run()`:**
  - Calls `op-deployer bootstrap superchain` to deploy new superchain contracts (`SuperchainConfig`
    and `ProtocolVersions`), corresponding to the previous release.
**Contributor:** Is executing this command analogous to running both `DeploySuperchain.s.sol` and `DeployImplementations.s.sol`, but only having those scripts deploy the contracts that changed since the last release? We don't need to get into implementation here, but one idea I've had for doing this is to have those scripts use `create2`, precompute the deploy addresses, and skip deployment when code already exists. This requires a one-time redeploy of all contracts using this method, which we have to do for Isthmus anyway since we're bumping to 0.8.25.

**Contributor Author:** In my mind this command would only do what `DeploySuperchain.s.sol` currently does (but for a given release).

**Contributor:** Ah, and `op-deployer bootstrap opcm` deploys the new implementations before deploying the OPCM?

**Contributor Author:** Yes, as far as I can tell that is what it does, as it creates this output:

```json
{
  "Opcm": "0xe3ef345391654121f385679613cea79a692c2dd8",
  "DelayedWETHImpl": "0x71e966ae981d1ce531a7b6d23dc0f27b38409087",
  "OptimismPortalImpl": "0xe2f826324b2faf99e513d16d266c3f80ae87832b",
  "PreimageOracleSingleton": "0x9c065e11870b891d214bc2da7ef1f9ddfa1be277",
  "MipsSingleton": "0x16e83ce5ce29bf90ad9da06d2fe6a15d5f344ce4",
  "SystemConfigImpl": "0xf56d96b2535b932656d3c04ebf51babff241d886",
  "L1CrossDomainMessengerImpl": "0xd3494713a5cfad3f5359379dfa074e2ac8c6fd65",
  "L1ERC721BridgeImpl": "0xae2af01232a6c4a4d3012c5ec5b1b35059caf10d",
  "L1StandardBridgeImpl": "0x64b5a5ed26dcb17370ff4d33a8d503f0fbd06cff",
  "OptimismMintableERC20FactoryImpl": "0xe01efbeb1089d1d1db9c6c8b135c934c0734c846",
  "DisputeGameFactoryImpl": "0xc641a33cab81c559f2bd4b21ea34c290e2440c2b"
}
```

**Contributor:** Does this introduce a circular dependency issue at all?

**Contributor Author:** Can you say more about what you think that might be?

**Contributor:** Originally the OPCM and its Solidity tests were self-contained, and `op-deployer` wrapped them. Now the OPCM requires `op-deployer` for its own testing. I am not familiar enough with the architecture to know whether there is actually a problem here, and I don't have a better solution that feels less circular.

**Contributor Author:** Hmm, I think this is OK because `op-deployer` is only being used to set up a system from a previous release; the testing then acts directly on the OPCM and the updated implementations, which are on `develop`.

*Comment on lines +88 to +89:*
**Contributor Author:**

> corresponding to the previous release.

@mslipper This might be a gotcha I've missed, i.e. since `op-deployer` now only deploys a single release version, we'd need to use the previous release of `op-deployer`, not the one that exists on `develop`. Can we easily install and run a previous version?

**Contributor:** As long as a tagged release exists that can deploy all the prerequisite contracts, then yes.

  - Calls `op-deployer bootstrap opcm` to deploy the release OPCM, corresponding to the previous release.
**Contributor:** Should this be merged with `op-deployer bootstrap superchain`? It seems like any time we want to deploy new superchain implementation contracts we'd also want to deploy a corresponding new OPCM.

**Contributor Author:** I don't have a strong opinion here; maybe @mslipper does?

For the purposes of this design, I'm happy to make two `op-deployer bootstrap` calls, or just one if we do combine them.

**Contributor:** No strong opinion either, happy to defer.

  - Calls `DeployImplementations.run()` to deploy the contracts necessary for upgrading to the system on `develop`.
  - Parses the deployment output from `op-deployer` and writes it to disk using the `Deploy.save()`
    functions, so that `Setup.L1()` can read the deployment.
- **`OPCM.upgrade()`:**
  - Upgrades proxies to the new implementation contracts and bespoke singleton contracts as necessary for a new OP Chain.
  - This flow is described in detail in the [L1 Upgrades design](../protocol/l1-upgrades.md#upgrade-process).
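To make the proposal concrete, here is a minimal sketch of what `Upgrade.run()` might look like,
assuming `op-deployer` is invoked via `vm.ffi` and its JSON output is parsed back into the test
environment. The output file name, JSON keys, and the `upgrade` signature below are all
assumptions, not the actual interfaces:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.15;

import { Script } from "forge-std/Script.sol";

// Hypothetical interface for the OPCM upgrade entrypoint; the real signature
// is defined by the L1 Upgrades design, not here.
interface IOPContractsManager {
    function upgrade(address _systemConfig) external;
}

contract Upgrade is Script {
    function run(string memory _artifactsLocator, string memory _outdir) public {
        // 1. Deploy the previous release's superchain contracts via op-deployer.
        //    Requires a live RPC endpoint (anvil) and `ffi = true` in foundry.toml.
        string[] memory cmd = new string[](6);
        cmd[0] = "op-deployer";
        cmd[1] = "bootstrap";
        cmd[2] = "superchain";
        cmd[3] = _artifactsLocator;
        cmd[4] = "--outdir";
        cmd[5] = _outdir;
        vm.ffi(cmd);

        // 2. Parse op-deployer's output so the addresses can be persisted via
        //    Deploy.save() for Setup.L1() to read. File name and keys are assumed.
        string memory json = vm.readFile(string.concat(_outdir, "/superchain.json"));
        address opcm = vm.parseJsonAddress(json, ".Opcm");
        address systemConfig = vm.parseJsonAddress(json, ".SystemConfigProxy");

        // 3. Deploy the implementations under test from develop
        //    (DeployImplementations.run(), elided here), then run the upgrade.
        IOPContractsManager(opcm).upgrade(systemConfig);
    }
}
```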

This testing setup route would be indicated with a new `useUpgradedSystem` flag in `CommonTest`. The
new flag could only be enabled when other flags (`useAltDAOverride`, `useLegacyContracts`,
`useInteropOverride`, `customGasToken`) are disabled.
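A minimal sketch of how that gating might look in `CommonTest`; only the flag names above are
taken from this proposal, and the field types, revert message, and routing comment are
illustrative:

```solidity
// Hypothetical sketch of flag gating in CommonTest.setUp().
abstract contract CommonTest {
    bool internal useUpgradedSystem;
    bool internal useAltDAOverride;
    bool internal useLegacyContracts;
    bool internal useInteropOverride;
    address internal customGasToken;

    function setUp() public virtual {
        if (useUpgradedSystem) {
            require(
                !useAltDAOverride && !useLegacyContracts && !useInteropOverride
                    && customGasToken == address(0),
                "CommonTest: useUpgradedSystem cannot be combined with other overrides"
            );
            // Setup.L1() would route to Upgrade.run() instead of Deploy.run() here.
        }
    }
}
```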

Note that this testing would need to run against an `anvil` node, as `op-deployer bootstrap`
requires an RPC endpoint. I am not experienced with running `forge test` against an anvil node,
so I would appreciate any gotchas I might be missing. (A sketch of the fork setup follows the
discussion below.)
*Comment on lines +102 to +104:*
**Contributor:** Am I correct that the flow here would be for our forge tests to invoke `op-deployer` against an anvil RPC URL, then after `op-deployer` runs it would `vm.createSelectFork(anvilRpcUrl)`?

**Contributor Author:**

> invoke op-deployer against an anvil RPC URL

Yes.

> then after op-deployer runs it would vm.createSelectFork(anvilRpcUrl)?

I had envisioned calling `forge test --fork-url anvilRpcUrl`. Creating the fork in the script seems less error prone, so I like this. Are there any other differences I should be aware of between the two approaches?

**Contributor:**

> I had envisioned calling forge test --fork-url anvilRpcUrl. Creating the fork in the script seems less error prone so I like this

I am not certain, but it's possible the `forge test --fork-url anvilRpcUrl` approach might not work because forge may always make requests at whatever the initial block was at fork time, i.e. I don't know whether it follows the chain live. So starting our fork from within the test/script itself, after anvil is properly configured, seems safer anyway.

I don't think there should be any other differences. There are some `createSelectFork` things to be aware of if you switch between forks in the tests, like `vm.makePersistent`.
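Following that discussion, the fork creation could live in the setup path itself. A sketch,
assuming the anvil URL is passed via an environment variable (the variable and contract names
are illustrative):

```solidity
import { Test } from "forge-std/Test.sol";

// Hypothetical sketch: create the fork from within the test setup after
// op-deployer has populated the anvil node, rather than via `forge test --fork-url`.
contract UpgradedSystemFork is Test {
    function setUpFork() internal {
        string memory anvilRpcUrl = vm.envString("ANVIL_RPC_URL"); // assumed env var name
        vm.createSelectFork(anvilRpcUrl);
        // If tests later switch between forks, contracts that must survive the
        // switch need vm.makePersistent(address(...)), as noted above.
    }
}
```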


# Alternatives Considered

An alternative considered was to add a new `op-deployer download` command to fetch artifacts, then
make minimal modifications to `DeploySuperchain` and `DeployImplementations` to deploy those
artifacts, adding branching logic that provides a different artifacts path to `vm.getCode()`.

The challenge with this approach is that `DeployImplementations` will need other changes from
release to release, so we would need branching logic in that script to accommodate
at least the most recent release and the current release.
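For reference, the branching in this rejected alternative would have looked something like the
following; the paths, flag, and contract names are illustrative:

```solidity
import { Script } from "forge-std/Script.sol";

// Hypothetical sketch of the rejected approach: point vm.getCode at either
// downloaded previous-release artifacts or the locally compiled ones.
contract ArtifactSource is Script {
    function superchainConfigCode(bool _usePreviousRelease, string memory _downloadDir)
        internal
        view
        returns (bytes memory)
    {
        // vm.getCode accepts either a path to an artifact JSON or "File.sol:Contract".
        string memory artifact = _usePreviousRelease
            ? string.concat(_downloadDir, "/SuperchainConfig.sol/SuperchainConfig.json")
            : "SuperchainConfig.sol:SuperchainConfig";
        return vm.getCode(artifact);
    }
}
```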

# Risks & Uncertainties

- Running `forge test` against an anvil node is new territory for this test suite; as discussed
  above, `forge test --fork-url` may pin to the initial block at fork time, so forking from within
  the script via `vm.createSelectFork` is likely the safer route.
- Setting up the previous release depends on a tagged `op-deployer` release that can deploy all
  the prerequisite contracts.
- Using `op-deployer` inside the OPCM's own test setup introduces a degree of circularity between
  the two, though it only applies to setting up the previous-release system.
- The `useUpgradedSystem` flag cannot be combined with the other `CommonTest` flags, so those
  configurations will not be covered by upgrade-path testing.