At the moment, the succinct operator follows a one-shot style command system. This experience is not seamless or efficient when it comes to making requests for proofs. To integrate well with DA, we need to introduce a system that automatically requests proofs once certain criteria are met.
Why?
Consider the DA use case, where arbitrary users submit transactions, and those transactions need to be proven on Ethereum. This requires the following steps (see the sketch after this list):
1. submit transaction
2. request to prove
3. prove
4. relay to verifier
5. verify
6. emit event
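To make the flow concrete, here is a minimal sketch of these steps as a state machine. All names here are hypothetical illustrations, not types from the operator codebase.

```rust
/// Stages a submitted transaction moves through before its event is
/// emitted on Ethereum. Hypothetical names, for illustration only.
#[derive(Debug, Clone, Copy)]
enum ProofStage {
    Submitted,      // 1. user submits a transaction to the DA layer
    ProofRequested, // 2. operator requests a proof
    Proven,         // 3. proof generated
    Relayed,        // 4. proof relayed to the on-chain verifier
    Verified,       // 5. verifier accepts the proof
    EventEmitted,   // 6. success event emitted
}

/// Advance to the next stage, or `None` once the flow is complete.
fn next_stage(stage: ProofStage) -> Option<ProofStage> {
    use ProofStage::*;
    match stage {
        Submitted => Some(ProofRequested),
        ProofRequested => Some(Proven),
        Proven => Some(Relayed),
        Relayed => Some(Verified),
        Verified => Some(EventEmitted),
        EventEmitted => None,
    }
}

fn main() {
    // Walk one transaction through the whole pipeline.
    let mut stage = ProofStage::Submitted;
    while let Some(next) = next_stage(stage) {
        println!("{:?} -> {:?}", stage, next);
        stage = next;
    }
}
```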
If we do this for each blob submission, it will likely be expensive. At the moment the verification circuit proves transactions/receipts in batches of 128. We can implement a queue mechanism that drains the queue for proving on a few triggers:
1. the queue is full
2. a configurable timeslot elapses
3. a hard override
For triggers 2 and 3, we have to pad the queue with empty transactions, wasting some relay space. We do gain proving time, since the default proofs are ignored. A sketch of the trigger and padding logic follows.
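A minimal sketch of the queue, assuming a Rust operator. The batch size of 128 comes from the issue text; `ProofQueue`, `Item`, and the method names are assumptions for illustration.

```rust
use std::time::{Duration, Instant};

/// Batch size used by the verification circuit (from the issue text).
const BATCH_SIZE: usize = 128;

/// A queued item; `None` marks an empty padding transaction whose
/// default proof is ignored by the circuit.
type Item = Option<Vec<u8>>;

struct ProofQueue {
    items: Vec<Item>,
    last_drain: Instant,
    timeslot: Duration, // trigger 2: configurable timeslot
}

impl ProofQueue {
    /// Returns true when any of the three drain triggers fires.
    fn should_drain(&self, hard_override: bool) -> bool {
        self.items.len() >= BATCH_SIZE                    // 1. queue full
            || self.last_drain.elapsed() >= self.timeslot // 2. timeslot elapsed
            || hard_override                              // 3. hard override
    }

    /// Take up to one batch, padding with empty transactions when the
    /// drain fired before the queue filled (triggers 2 and 3).
    fn drain_batch(&mut self) -> Vec<Item> {
        let take = self.items.len().min(BATCH_SIZE);
        let mut batch: Vec<Item> = self.items.drain(..take).collect();
        batch.resize(BATCH_SIZE, None); // padding costs relay space only
        self.last_drain = Instant::now();
        batch
    }
}
```

An operator loop would poll `should_drain` on each new submission and on a timer, calling `drain_batch` when it fires.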
There is also an additional dimension: the light client head needs to be synced to support the latest proof. If we can somehow amortize this into one verification, that would be ideal. Until then, the queue should trigger a sync for the latest head and then drain the queue for proving, unless the queued items can be proven against the current head.
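A sketch of that sync-then-drain policy, building on the hypothetical `ProofQueue` above. The slot-based head comparison and `sync_head` helper are assumptions, not the operator's actual API.

```rust
/// Heads identified by slot number, for simplicity.
type Slot = u64;

/// Drain policy: sync the light client head only when the queued items
/// cannot be proven against the head we already have.
fn drain_with_sync(
    queue: &mut ProofQueue,
    current_head: Slot,
    newest_item_slot: Slot,
) -> Vec<Item> {
    if newest_item_slot > current_head {
        // Hypothetical helper: advance the light client to a head that
        // covers the newest queued item, amortizing one sync per batch.
        sync_head(newest_item_slot);
    }
    queue.drain_batch()
}

/// Placeholder for requesting a light client update; not a real API.
fn sync_head(target: Slot) {
    let _ = target;
}
```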