decoding: fix panics by limiting max decode sizes in proofs, commitments and assets #525
Conversation
proof/meta.go
Outdated
// Valid type, fall through.

default:
	return ErrInvalidMetaType
I think we still want to allow the other types, so that people can start to experiment with how they should be defined.
FYI, with the code as is, adding a new meta reveal type would involve bumping the proof.Proof version.
Okay, I see the point about being able to add other types (without changing the proto or the proof version). But then shouldn't we begin with a 1:1 mapping of the RPC enum to the native type, so we don't need any switch statements anywhere? So basically changing MetaOpaque MetaType = 1 to MetaOpaque MetaType = 0 so it matches META_TYPE_OPAQUE = 0?
> FYI, with the code as is, adding a new meta reveal type would involve bumping the proof.Proof version.

Why's that the case? No interpretation is actually needed. You just need to re-create the TLV serialization, then hash. In reality, the type here is more of a client side thing.
Yeah, he meant it was because of the marshal and un-marshal functions that returned an error for unknown meta types. The way I solved it now (direct cast, no switch statement) should allow users to use custom types.
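As a rough illustration of the "direct cast, no switch" approach described above (a minimal sketch; the names MetaType, MetaOpaque and fromRPC mirror the discussion but are assumptions, not the PR's actual code):

```go
package main

import "fmt"

// MetaType mirrors the idea of the native meta type: a small numeric alias
// whose values are hashed into the proof but never interpreted by the daemon.
type MetaType uint8

// MetaOpaque is the only value the daemon assigns meaning to today.
const MetaOpaque MetaType = 0

// fromRPC converts an RPC enum value into the native type with a direct
// cast instead of a switch, so custom/experimental values pass through
// (values above 255 simply wrap, since the native type is a uint8 here).
func fromRPC(rpcType int32) MetaType {
	return MetaType(rpcType)
}

func main() {
	// A value the daemon doesn't assign any meaning to still round-trips.
	fmt.Println(fromRPC(42)) // 42
}
```

Because no switch rejects unknown values, adding a new meta type later needs no change to the decoder or the proof version.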
proof/file.go
Outdated
// asset every 10 minutes for 1 year straight. This limitation might be
// lifted at some point when proofs can be compressed into a single
// zero-knowledge proof.
FileMaxNumProofs = 52500
I think maybe we'll want a bit more breathing room here.
Mid-term, we could also move to an approach where we just keep the proofs on disk and memory map them, so we never have to load the entire thing into user space memory at once.
Yeah, makes sense. Bumped it to 420000 (roughly 8 years).
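For reference, the arithmetic behind these constants (a minimal sketch, only verifying the numbers in the comment above, assuming one proof per 10-minute block):

```go
package main

import "fmt"

func main() {
	// One proof per 10-minute block: 6 per hour, 24 hours, 365 days.
	proofsPerYear := 6 * 24 * 365 // 52,560

	// Roughly how many years each limit covers.
	fmt.Printf("52500 proofs  ≈ %.1f years\n", 52500/float64(proofsPerYear))
	fmt.Printf("420000 proofs ≈ %.1f years\n", 420000/float64(proofsPerYear))
}
```

So the original 52,500 is about one year of 10-minute transfers, and the bumped 420,000 is about eight years.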
I imagine we won't write proof files for each update in an LN channel right?
No, I don't think so. We'll just create a new single transition proof suffix for each update, and each update would be at the same "height" (same location in the proof file, so the file wouldn't be extended with every update).
> I imagine we won't write proof files for each update in an LN channel right?
They'll get replaced basically each time. Both sides also have everything they need to make the proof files that result from on-chain channel operations.
Hmm, so if we rewrite the proof file and update the DB, the number of proofs would stay static 👌🏽
Great to see these checks implemented!
I appreciate the small commits. I think this is very close to being ready. I just have some concerns with MB vs MiB, but that's about it.
Looking pretty good! I think some of the new limits are missing unit tests + this PR could also include similar mitigation for address decoding:
taproot-assets/address/address.go
Line 369 in bf4af96
return stream.Decode(r)
I think just using DecodeP2P here is good enough?
Ah, good catch. Added!
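For context, a minimal sketch of the suggested change (assuming the lnd tlv package; the surrounding helper and its exact shape here are simplified assumptions, not the repository's real address decoder):

```go
package address

import (
	"io"

	"github.com/lightningnetwork/lnd/tlv"
)

// decodeRecords reads the given TLV records from r. Using DecodeP2P instead
// of Decode caps every record at 65,535 bytes, so untrusted input can't make
// the decoder allocate arbitrarily large buffers.
func decodeRecords(r io.Reader, records ...tlv.Record) error {
	stream, err := tlv.NewStream(records...)
	if err != nil {
		return err
	}

	// Before: return stream.Decode(r)
	return stream.DecodeP2P(r)
}
```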
Looking pretty good! Only large nit is to add test cases for the changes where we have custom limits vs. DecodeP2P(). So TxMerkleProofRecord with 16+ nodes, MetaRevealDataRecord, AnchorTxRecord, etc. Though the change is simple enough that maybe that's overkill and fuzzing time is good enough.
Ready to approve if so.
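A rough sketch of what such a decode-limit test could look like (illustrative only; the 16-node cap, the helper decodeMerkleNodeCount and the error are hypothetical stand-ins, not the repository's real records or test helpers):

```go
package proof

import (
	"bytes"
	"encoding/binary"
	"errors"
	"testing"
)

// errTooManyNodes and the 16-node cap stand in for the record-specific
// limits added in this PR.
var errTooManyNodes = errors.New("too many merkle nodes")

const maxMerkleNodes = 16

// decodeMerkleNodeCount reads a node count from the input and rejects
// anything above the cap before allocating based on it.
func decodeMerkleNodeCount(r *bytes.Reader) (uint64, error) {
	count, err := binary.ReadUvarint(r)
	if err != nil {
		return 0, err
	}
	if count > maxMerkleNodes {
		return 0, errTooManyNodes
	}
	return count, nil
}

func TestMerkleProofNodeLimit(t *testing.T) {
	testCases := []struct {
		name      string
		nodeCount uint64
		expectErr bool
	}{
		{name: "at limit", nodeCount: maxMerkleNodes, expectErr: false},
		{name: "above limit", nodeCount: maxMerkleNodes + 1, expectErr: true},
	}

	for _, tc := range testCases {
		buf := binary.AppendUvarint(nil, tc.nodeCount)
		_, err := decodeMerkleNodeCount(bytes.NewReader(buf))
		if (err != nil) != tc.expectErr {
			t.Fatalf("%s: unexpected result: %v", tc.name, err)
		}
	}
}
```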
Looking really good! Only nit is now a separate proof size limit for RPC input. I think instead of adding that in decode it could actually be somewhere like
I've reviewed the changes since my last review. All seem reasonable.
Discussed other places to limit proof sizes offline, these changes look good 🚀
Fixes #510.
Fixes #502.
Contains a commit (6174f9d) that is already in the v0-3-0-staging branch but was incomplete, so I added itests for it as well.

This PR fixes a number of proof and asset decoding related issues discovered through fuzz tests. Both FuzzFile and FuzzProof were run for an hour each after the changes and they didn't discover any additional panics.

To help any reviewer, here are all the magic numbers in one place:

We use math.MaxUint16 for the number of previous witnesses and the number of TX witness elements (as well as the max length for each individual witness stack element). Everything else uses the tlv.MaxRecordSize (65k) size, as that should fit all of those elements.

cc @Crypt-iQ
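To make the effect of these caps concrete, here is a minimal, self-contained sketch of the general pattern (the function name and error are illustrative, not the PR's actual code): read a count from untrusted input and refuse to allocate anything based on it once it exceeds math.MaxUint16.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"errors"
	"fmt"
	"math"
)

var errTooManyItems = errors.New("decoded count exceeds maximum")

// decodeCount reads a varint item count from untrusted input and enforces an
// upper bound before anything is allocated from it. This is the general
// shape of the fix: never trust a length prefix to size a slice.
func decodeCount(r *bytes.Reader) (uint64, error) {
	count, err := binary.ReadUvarint(r)
	if err != nil {
		return 0, err
	}
	if count > math.MaxUint16 {
		return 0, errTooManyItems
	}
	return count, nil
}

func main() {
	// A hostile input claiming 2^32 items is rejected before any allocation.
	hostile := binary.AppendUvarint(nil, 1<<32)
	_, err := decodeCount(bytes.NewReader(hostile))
	fmt.Println(err) // decoded count exceeds maximum
}
```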