Gnosis has reduced the EIP-4844 parameter `MAX_BLOB_GAS_PER_BLOCK` to the equivalent of 2 blobs due to its faster slot times (specs/network-upgrades/dencun.md, line 35 at a249f7d).

However, the consensus specs define two independent variables that limit blobs in different ways. When possible we prefer not to modify consensus variables that would require custom code in consensus implementations, so that the same codebase can work for Ethereum and Gnosis networks. These variables are:
| variable | value | type |
| --- | --- | --- |
| `MAX_BLOBS_PER_BLOCK` | 6 | CL preset |
| `BLOB_SIDECAR_SUBNET_COUNT` | 6 | CL config |
So: is it safe to NOT reduce `MAX_BLOBS_PER_BLOCK` and `BLOB_SIDECAR_SUBNET_COUNT`?
Current usage
`MAX_BLOBS_PER_BLOCK` is used in the state transition to enforce the maximum number of blobs, independently of `MAX_BLOB_GAS_PER_BLOCK`:
```python
def process_execution_payload(state: BeaconState, body: BeaconBlockBody, execution_engine: ExecutionEngine) -> None:
    ...
    # [New in Deneb:EIP4844] Verify commitments are under limit
    assert len(body.blob_kzg_commitments) <= MAX_BLOBS_PER_BLOCK
```
`MAX_BLOBS_PER_BLOCK` is used in ReqResp to compute `MAX_REQUEST_BLOB_SIDECARS = MAX_REQUEST_BLOCKS_DENEB * MAX_BLOBS_PER_BLOCK`. `MAX_REQUEST_BLOB_SIDECARS` is used to limit `/eth2/beacon_chain/req/blob_sidecars_by_root/1/` and `/eth2/beacon_chain/req/blob_sidecars_by_range/1/` protocol requests:
> The response MUST contain no more than `count * MAX_BLOBS_PER_BLOCK` blob sidecars.
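For concreteness, a minimal sketch of how this bound plays out, assuming the standard Deneb value `MAX_REQUEST_BLOCKS_DENEB = 128` (the helper name is illustrative, not client code):

```python
# Sketch of the ReqResp blob-sidecar bound (illustrative, not client code).
MAX_REQUEST_BLOCKS_DENEB = 128  # assumed Deneb config value
MAX_BLOBS_PER_BLOCK = 6         # unchanged CL preset

# Upper bound on sidecars returned by a single by_root / by_range request.
MAX_REQUEST_BLOB_SIDECARS = MAX_REQUEST_BLOCKS_DENEB * MAX_BLOBS_PER_BLOCK  # 768

def max_sidecars_for_by_range(count: int) -> int:
    # "The response MUST contain no more than count * MAX_BLOBS_PER_BLOCK blob sidecars."
    return min(count, MAX_REQUEST_BLOCKS_DENEB) * MAX_BLOBS_PER_BLOCK
```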
`MAX_BLOBS_PER_BLOCK` is used in the gossip topic `beacon_block` to limit the count of `blob_kzg_commitments`:
> [REJECT] The length of KZG commitments is less than or equal to the limitation defined in Consensus Layer -- i.e. validate that `len(body.signed_beacon_block.message.blob_kzg_commitments) <= MAX_BLOBS_PER_BLOCK`
`MAX_BLOBS_PER_BLOCK` is used in the gossip topic `blob_sidecar_{subnet_id}` to upper-bound the `index` field:
> [REJECT] The sidecar's index is consistent with `MAX_BLOBS_PER_BLOCK` -- i.e. `blob_sidecar.index < MAX_BLOBS_PER_BLOCK`.
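A minimal sketch of these two gossip-level bounds (function names are illustrative, not the spec's):

```python
# Both gossip checks keep referencing the unreduced CL preset value.
MAX_BLOBS_PER_BLOCK = 6

def beacon_block_commitments_within_limit(num_commitments: int) -> bool:
    # beacon_block topic: [REJECT] blocks with more KZG commitments than the CL limit.
    return num_commitments <= MAX_BLOBS_PER_BLOCK

def blob_sidecar_index_within_limit(index: int) -> bool:
    # blob_sidecar_{subnet_id} topic: [REJECT] sidecars whose index is out of range.
    return index < MAX_BLOBS_PER_BLOCK
```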
`BLOB_SIDECAR_SUBNET_COUNT` is used to compute the subnet on which each `blob_sidecar.index` has to be published:
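The referenced snippet is not reproduced above; as a sketch (reproduced from memory, so treat it as indicative rather than authoritative), the Deneb p2p spec computes the subnet roughly as:

```python
BLOB_SIDECAR_SUBNET_COUNT = 6

def compute_subnet_for_blob_sidecar(blob_index: int) -> int:
    # Each sidecar index maps to a fixed subnet. With 6 subnets but at most
    # 2 blobs per Gnosis block, subnets 2..5 simply carry no traffic.
    return blob_index % BLOB_SIDECAR_SUBNET_COUNT
```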
Consequences of `MAX_BLOBS_PER_BLOCK` exceeding the blob count allowed by `MAX_BLOB_GAS_PER_BLOCK`:
A block with too many commitments will be accepted by this initial consensus condition. If the block has too many blobs it will be rejected by execution validation. If the length of `versioned_hashes` does not match the count of blobs, the block is rejected.
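For illustration, a minimal sketch of the mismatch between the two limits, assuming the EIP-4844 constant `GAS_PER_BLOB = 2**17` and a Gnosis execution-layer limit equivalent to 2 blobs (the check names are illustrative):

```python
# Illustrative only: the CL and EL limits are enforced independently.
GAS_PER_BLOB = 2**17                       # EIP-4844 constant
MAX_BLOBS_PER_BLOCK = 6                    # unchanged CL preset
MAX_BLOB_GAS_PER_BLOCK = 2 * GAS_PER_BLOB  # Gnosis EL limit (2 blobs)

def cl_accepts(num_commitments: int) -> bool:
    # Consensus-layer check in process_execution_payload.
    return num_commitments <= MAX_BLOBS_PER_BLOCK

def el_accepts(num_blobs: int) -> bool:
    # Execution-layer blob gas check from EIP-4844.
    return num_blobs * GAS_PER_BLOB <= MAX_BLOB_GAS_PER_BLOCK

# Blocks carrying 3..6 blobs pass the CL check but fail execution validation.
assert cl_accepts(4) and not el_accepts(4)
```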
3x more blobs may be requested and returned (6 instead of 2 sidecars per requested block). Slightly higher bandwidth cost to nodes, but not a concern.
A block that will become invalid on 2) may be initially accepted and propagated. Slightly higher DoS vector but not a concern?
One could publish a blob sidecar with index > 2, and it will pass the condition highlighted in 5). However, to pass the second condition of the topic, the actual index in the blob sidecar also has to be > 2. Then the condition checking `verify_blob_sidecar_inclusion_proof` would fail unless the proposer has published a block that includes too many blobs. The proposer can temporarily increase the count of blobs being propagated in the network from 2 to 6, at the expense of losing a block proposal.
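A minimal sketch of that scenario; the inclusion-proof check is reduced to a stand-in here, since the real `verify_blob_sidecar_inclusion_proof` verifies a Merkle proof against the block body:

```python
MAX_BLOBS_PER_BLOCK = 6
BLOB_SIDECAR_SUBNET_COUNT = 6

def passes_gossip_checks(index: int, subnet_id: int) -> bool:
    # Index bound plus subnet-consistency check of the blob_sidecar topic.
    return index < MAX_BLOBS_PER_BLOCK and subnet_id == index % BLOB_SIDECAR_SUBNET_COUNT

def inclusion_proof_can_verify(index: int, commitments_in_block: int) -> bool:
    # Stand-in for verify_blob_sidecar_inclusion_proof: a valid proof requires
    # the block body to actually contain a commitment at this index.
    return index < commitments_in_block

# A sidecar with index 4 on subnet 4 passes gossip checks, but its inclusion
# proof cannot verify against an honest Gnosis block (<= 2 commitments).
assert passes_gossip_checks(4, 4) and not inclusion_proof_can_verify(4, 2)
```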
Nodes may subscribe to 4 subnet topics that will never publish or broadcast any messages. Could this be a problem for peer scoring? No: since blob publication is a function of demand, nodes can't expect a specific throughput per subnet. Thus a situation where a blob is never propagated on subnet 3 is indistinguishable from a period of no blob activity on the network.
Conclusions
Not reducing `MAX_BLOBS_PER_BLOCK` will allow broadcasting invalid data over p2p, but at the cost of the proposer not having its block accepted. There are no other concerns.
My main concern was that invalid blocks (blocks having `kzg_commitments.len() > 2`) would continue getting gossiped since the gossip conditions are still satisfied.
Here, the worst thing an attacker can do is make nodes verify and forward invalid blocks/blobs. But this isn't really a DoS vector since the disproportionate cost of this is the attacker missing the proposal slot.
I'm trying to think of some fork choice attacks here that take advantage of the fact that invalid blocks/blobs are gossiped around the network, but I cannot think of anything.
cc @realbigsean