WEB3-171: OP Steel #267
base: main
Conversation
940864d to 58f0fb1
This looks quite good so far! It's a testament to your work on Steel up to this point that there are not more changes required and that the logic layers on rather nicely 👌
crates/op-steel/Cargo.toml
Outdated
@@ -0,0 +1,35 @@
[package]
name = "op-steel"
Following our naming convention, this would be `risc0-op-steel`, although I can certainly see an argument that this naming convention is bad.
Suggested change:
- name = "op-steel"
+ name = "risc0-op-steel"
addressed in c280ce1
/// @notice Validates the Steel commitment.
/// @return True if the commitment is valid, false otherwise.
function validateCommitment(Steel.Commitment memory commitment) internal view returns (bool) {
I waffled on this a bit, but eventually decided to go with a pattern of reverting when invalid (and having a void return) rather than returning `false`. I think this is better in that it's harder to misuse, and it allows us to revert with a specific error instead of having the caller need to revert. We can add a `tryValidateCommitment` for the case that the caller wants a `bool` back.
Personally, I am really not a fan of custom errors: error handling in Solidity is difficult, and it gets even worse in `alloy` and `steel`, where you just get a cryptic hex string and it is super tedious to figure out the actual cause.
I also looked at OpenZeppelin, and their verification functions also usually return a `bool`; it is then up to the caller to do something with it.
import {Encoding, Steel} from "./Steel.sol";

abstract contract OpCommitmentValidator {
So, to be clear, this is the contract deployed on L1 to validate queries against OP L2 data, ya? It's always hard to keep these things straight (versus verifying on L2 while querying L1). Adding a docstring to the contract itself to state this inline might be helpful.
I am really unsure what the best Solidity design would be:
Ideally, I wanted a wrapper (providing the same `validateCommitment` function) around the Steel library that can also validate the new OP commitment (to verify an L2 proof on L1); unfortunately, this requires the address of the OptimismPortal2 contract.
Alternatively, `OpSteel` could be a library only providing `validateOpCommitment(IOptimismPortal2 optimismPortal, Steel.Commitment memory commitment)`, which fails if `commitment` does not have the correct version.
Or this could be an (`abstract`) `contract` that stores the address but only provides the above method.
/// @return True if the commitment is valid, false otherwise.
function validateCommitment(Steel.Commitment memory commitment) internal view returns (bool) {
    (uint240 blockID, uint16 version) = Encoding.decodeVersionedID(commitment.id);
    if (version == 0x100) {
What can we do to reduce the chance of version number collisions causing issues as we expand support? Should we perhaps do something more like having a registry, or perhaps make it a "selector" that is resistant to accidental collisions?
hmm, I like this idea. We could use `sha256("Beacon")` or something like that for the selector. However, I see two complications:
- The version is only a `uint16`; with only two bytes there could be accidental collisions. (However, it might be possible to change this to a `uint32` without breaking anything.)
- Currently we have `0x00`, `0x01` for block and beacon. So I guess it would be a breaking change if we changed these to selectors as well, and using a selector only for other types is also weird.
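To make the packing being discussed concrete, here is a hedged sketch of a versioned-ID scheme. It is not Steel's actual implementation: the on-chain type is a `uint256` holding a `uint240` block ID with a `uint16` version in the top bits, which this example shrinks to `u128`/`u64` to stay dependency-free.

```rust
// Hypothetical, simplified versioned-ID packing: version in the top 16 bits,
// block identifier in the low bits (Steel itself uses uint240 + uint16 in a
// uint256; here the ID is a u128 with a 112-bit block field for illustration).

fn encode_versioned_id(block_id: u64, version: u16) -> u128 {
    ((version as u128) << 112) | block_id as u128
}

fn decode_versioned_id(id: u128) -> (u64, u16) {
    // Low 112 bits are the block ID, top 16 bits are the version.
    ((id & ((1u128 << 112) - 1)) as u64, (id >> 112) as u16)
}
```

A selector variant would replace the small sequential version numbers with (for example) a truncated hash of a human-readable name, trading compactness for collision resistance; with only 16 bits available, truncation collisions remain a concern, which matches the `uint32` suggestion above.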
examples/op/Cargo.toml
Outdated
# Always optimize; otherwise tests take excessively long.
[profile.dev]
opt-level = 3
Is this true? I've been starting to remove this, since I think in many cases it's an artifact from when the prover was built directly instead of using the installed `r0vm`.
It's just a copy&paste artifact from other examples. If it is not needed, even better.
addressed in 729284a
    view
    returns (bytes memory payload)
{
    Steel.Commitment memory commitment = abi.decode(journal[:96], (Steel.Commitment));
`abi.decode(journal[:96], (Steel.Commitment))` seems somewhat fragile. I made a comment in the guest about committing a tuple instead and decoding a tuple here. I think that'd be a better pattern to show in the example.
The point of this verifier was to have a generic verifier that could work with any journal.
If the journal is changed from two independent slices to a single struct or tuple, this would no longer work and the Verifier.sol would need to know the exact type of the journal. And that is why I liked this approach.
After all, it turns out that each of the currently implemented examples returns the exact same type, so we could indeed implement a common journal struct. However, as soon as we add a new example with a different type, things would not be so nice.
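The generic layout the author describes can be sketched in a few lines. This is an illustration, not code from the PR: a fixed-size ABI-encoded `Steel.Commitment` (three 32-byte words, hence the 96 in `journal[:96]`) followed by an opaque payload whose type the verifier never needs to know.

```rust
// Hedged sketch of the generic journal layout: a 96-byte commitment prefix
// followed by an arbitrary payload. Splitting requires no knowledge of the
// payload's type, which is the property the author wants to keep.
fn split_journal(journal: &[u8]) -> Option<(&[u8], &[u8])> {
    if journal.len() < 96 {
        return None; // too short to contain a commitment
    }
    Some(journal.split_at(96))
}
```

The trade-off is exactly the one debated above: the split stays type-agnostic, but only as long as the commitment keeps its fixed 96-byte encoding at the front.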
import {Encoding, Steel} from "./Steel.sol";

abstract contract OpCommitmentValidator {
Why make this an `abstract` contract? If concrete, it would still be possible to inherit from, while also being possible to deploy and call as an external dependency. Although they are `internal` now, the validation functions seem like they'd be reasonable to make `public`.
I like the idea of encouraging people to inherit from `OpCommitmentValidator`, as it would be closer to the library we already have for Steel.
I am not sure there is real demand to make this contract publicly available: the actual validation code is rather small and simple, and calling an external contract would increase gas costs.
let returns = contract.call_builder(&CALL).from(CALLER).call();

// Commit the result.
env::commit(&returns._0)
I think it'd be a better pattern to use something like encoding a tuple or another lightweight data structure to combine the data and commit with one call, rather than calling commit twice. Concatenating data by calling commit more than once seems more error-prone, potentially resulting in confusing decoding issues (and potentially security bugs if it ends up producing a byte string with multiple valid decodings).
Suggested change:
- env::commit(&returns._0)
+ env::commit_slice((&env.commitment(), &returns._0).abi_encode())
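The multiple-valid-decodings concern can be shown with a generic, dependency-free sketch (this is not the risc0 API, just an illustration of the underlying encoding issue): unframed concatenation loses the boundary between the two parts, while framing each part, similar in spirit to ABI-encoding the pair as one tuple, makes the decoding unique.

```rust
// Unframed concatenation: two different input pairs can yield identical bytes,
// so a decoder cannot recover where the first part ends.
fn naive_concat(a: &[u8], b: &[u8]) -> Vec<u8> {
    [a, b].concat()
}

// Length-prefixed ("framed") concatenation: each part carries its length,
// so every byte string has at most one valid decoding.
fn framed_concat(a: &[u8], b: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    out.extend_from_slice(&(a.len() as u32).to_be_bytes());
    out.extend_from_slice(a);
    out.extend_from_slice(&(b.len() as u32).to_be_bytes());
    out.extend_from_slice(b);
    out
}
```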
I don't think there is a generic way to commit to the commitment and return value: to decode it, we would always need to know the exact type of the return values. This is why I used two commit statements.
Currently, the only alternatives I see are:
- not committing the return values at all (since they are not used in the examples anyway)
- implementing a Verifier.sol for each example (return type)
Should we drop this guest and just have an `l1` guest and an `l2` guest? It would kind of make sense to me for the guest to depend only on the chain being queried, and not on the chain it's being verified on. With the way this is written, it seems like this is actually possible as well.
We could potentially go even further if we start checking the `configID` in the verification process. We could allow the host to specify the config (e.g. by giving a chain ID which is used to look up the config from a lazily computed table). With that, a single guest could query multiple chains, and the verifier would check which chain was actually queried (e.g. by comparing it to a constant on that contract).
From a general software design point of view, I agree 100%. However, for examples, it might make sense to think of each example as a self-contained unit that contains everything.
Each current Steel example is even its own workspace for this reason.
Overall, I am not sure what the best example design would be; for the OP examples I have somewhat tried to follow the r0 examples, which are all in a common crate but each contain their own guest code.
alloy::sol!(
    #[sol(rpc)]
    Verifier,
    "../contracts/out/Verifier.sol/Verifier.json"
Unfortunately, this only works after a `forge build`, which means that this would also be required before any `cargo clippy --all-features` or `cargo test --all-features`. I can see the following solutions:
- Replace `--all-features` with the actual features like `host`.
- Add a "dummy" `Verifier.json` to the repo and use `cfg` flags to use it when clippy is executed.
- Run `forge build` before.
I now added a build script that always calls `forge build` when verification is needed.
Not perfect, but might be a good compromise.
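A minimal sketch of such a build-script step, assuming a sibling `contracts` directory and Foundry on `PATH` (both assumptions about the project layout, not code from the PR). The tool name and directory are parameters so the helper itself stays tool-agnostic:

```rust
use std::path::Path;
use std::process::Command;

// Hedged sketch of a build.rs helper: run an external build tool in a given
// directory and report whether it succeeded. In a real build.rs, a failure
// would typically be turned into a panic so the cargo build aborts.
fn run_tool(program: &str, args: &[&str], dir: &Path) -> std::io::Result<bool> {
    let status = Command::new(program).args(args).current_dir(dir).status()?;
    Ok(status.success())
}

// In build.rs one might call (hypothetical path):
// assert!(run_tool("forge", &["build"], Path::new("../contracts")).unwrap());
```

The obvious caveat is the one raised below: this makes Foundry a hard dependency of every `cargo build`.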
// The game must have been resolved for at least `proofMaturityDelaySeconds`.
if (block.timestamp - game.resolvedAt() <= optimismPortal.proofMaturityDelaySeconds()) return false;
// The game must not be blacklisted.
if (optimismPortal.disputeGameBlacklist(game)) return false;
I don't like the OP design here because it makes `validateDisputeGameCommitment` non-monotonic:
- `disputeGameBlacklist` only returns a boolean, not a timestamp.
- If the guardian blacklists a game long after its maturity delay at time *ts*, `validateDisputeGameCommitment` will succeed before *ts*, but then suddenly fail when rerun on the same input after *ts*.
I don't see a good fix for this. I guess removing the `disputeGameBlacklist` check would even lead to more inconsistencies.
crates/steel/Cargo.toml
Outdated
I guess if we eventually have more l3-steel crates, it makes a lot of sense to have them in a common subdirectory instead of hiding the main dir. However, with only two crates at the moment, it does not matter yet.
All OP examples are in a common workspace to simplify version consistency. However, the individual examples are no longer strictly self-contained; if a user wants to use an example as a starting point, they can no longer just copy the example folder.
use risc0_zkvm::{sha::Digest, Receipt};
use url::Url;

pub async fn verify_on_chain(
The common method gives us an easy way to turn any example into an e2e test: it deploys the actual verifier contract on the forked target chain, so we can then do an `eth_call` against it to check that the proof and the verification library actually work.
This has the following drawbacks:
- It adds code to the examples that is not relevant to a user.
- Verification requires calls to a public RPC provider; if they are down, the tests will fail.
Is this worth it?
env:
  RUST_LOG: "info,risc0_steel=debug,risc0_op_steel=debug"
  RISC0_DEV_MODE: true
  L1_RPC_URL: https://eth-mainnet.g.alchemy.com/v2/${{ secrets.ALCHEMY_RISC0_ETH_API_KEY }}
Here we run the CI against the actual live network; an alternative would be to spin up a local Docker devnet containing at least an L1, an L2, and a Beacon node.
[package]
name = "risc0-op-steel"
description = "Optimism abstraction for Steel."
version = "0.1.0-alpha.1"
What is a good versioning scheme for op-steel?
Should we also mark it as unstable or something?
println!("foundry root: {}", foundry_root.display());

// Make sure the Verifier.sol file is always compiled.
let status = Command::new("forge")
This assumes that Foundry is installed for each `cargo build`.
This PR adds the `op-steel` crate, which provides wrappers and utilities to run Steel on OP chains and do cross-domain proving.