diff --git a/404.html b/404.html index b27d4972..7e0a0b48 100644 --- a/404.html +++ b/404.html @@ -91,7 +91,7 @@ diff --git a/approved/0001-agile-coretime.html b/approved/0001-agile-coretime.html index 23ef53dc..d5ee1fe4 100644 --- a/approved/0001-agile-coretime.html +++ b/approved/0001-agile-coretime.html @@ -90,7 +90,7 @@ diff --git a/approved/0005-coretime-interface.html b/approved/0005-coretime-interface.html index 2ff0162d..87e38da3 100644 --- a/approved/0005-coretime-interface.html +++ b/approved/0005-coretime-interface.html @@ -90,7 +90,7 @@ diff --git a/approved/0007-system-collator-selection.html b/approved/0007-system-collator-selection.html index f2f16782..d2c3f583 100644 --- a/approved/0007-system-collator-selection.html +++ b/approved/0007-system-collator-selection.html @@ -90,7 +90,7 @@ diff --git a/approved/0008-parachain-bootnodes-dht.html b/approved/0008-parachain-bootnodes-dht.html index 355a1405..7bf3d801 100644 --- a/approved/0008-parachain-bootnodes-dht.html +++ b/approved/0008-parachain-bootnodes-dht.html @@ -90,7 +90,7 @@ diff --git a/approved/0010-burn-coretime-revenue.html b/approved/0010-burn-coretime-revenue.html index ff966812..74785b4b 100644 --- a/approved/0010-burn-coretime-revenue.html +++ b/approved/0010-burn-coretime-revenue.html @@ -90,7 +90,7 @@ diff --git a/approved/0012-process-for-adding-new-collectives.html b/approved/0012-process-for-adding-new-collectives.html index 079f5780..38bf769d 100644 --- a/approved/0012-process-for-adding-new-collectives.html +++ b/approved/0012-process-for-adding-new-collectives.html @@ -90,7 +90,7 @@ diff --git a/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html b/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html index 32fc354c..76c7cd2c 100644 --- a/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html +++ b/approved/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html @@ -90,7 +90,7 @@ diff --git 
a/approved/0014-improve-locking-mechanism-for-parachains.html b/approved/0014-improve-locking-mechanism-for-parachains.html index 55941676..428fd2d6 100644 --- a/approved/0014-improve-locking-mechanism-for-parachains.html +++ b/approved/0014-improve-locking-mechanism-for-parachains.html @@ -90,7 +90,7 @@ diff --git a/approved/0022-adopt-encointer-runtime.html b/approved/0022-adopt-encointer-runtime.html index 4a9e7c62..70e04469 100644 --- a/approved/0022-adopt-encointer-runtime.html +++ b/approved/0022-adopt-encointer-runtime.html @@ -90,7 +90,7 @@ diff --git a/approved/0026-sassafras-consensus.html b/approved/0026-sassafras-consensus.html index adbaa5ac..5639e618 100644 --- a/approved/0026-sassafras-consensus.html +++ b/approved/0026-sassafras-consensus.html @@ -90,7 +90,7 @@ diff --git a/approved/0032-minimal-relay.html b/approved/0032-minimal-relay.html index b9693900..15cd27e0 100644 --- a/approved/0032-minimal-relay.html +++ b/approved/0032-minimal-relay.html @@ -90,7 +90,7 @@ diff --git a/approved/0042-extrinsics-state-version.html b/approved/0042-extrinsics-state-version.html index 242777ac..46f96f36 100644 --- a/approved/0042-extrinsics-state-version.html +++ b/approved/0042-extrinsics-state-version.html @@ -90,7 +90,7 @@ diff --git a/approved/0043-storage-proof-size-hostfunction.html b/approved/0043-storage-proof-size-hostfunction.html index bb8b3c36..a6c74723 100644 --- a/approved/0043-storage-proof-size-hostfunction.html +++ b/approved/0043-storage-proof-size-hostfunction.html @@ -90,7 +90,7 @@ diff --git a/approved/0045-nft-deposits-asset-hub.html b/approved/0045-nft-deposits-asset-hub.html index 0d2c7c4f..9c801605 100644 --- a/approved/0045-nft-deposits-asset-hub.html +++ b/approved/0045-nft-deposits-asset-hub.html @@ -90,7 +90,7 @@ diff --git a/approved/0047-assignment-of-availability-chunks.html b/approved/0047-assignment-of-availability-chunks.html index 71357dcc..2b9b9cf2 100644 --- a/approved/0047-assignment-of-availability-chunks.html +++ 
b/approved/0047-assignment-of-availability-chunks.html @@ -90,7 +90,7 @@ diff --git a/approved/0048-session-keys-runtime-api.html b/approved/0048-session-keys-runtime-api.html index 54d774b7..e202ab9e 100644 --- a/approved/0048-session-keys-runtime-api.html +++ b/approved/0048-session-keys-runtime-api.html @@ -90,7 +90,7 @@ diff --git a/approved/0050-fellowship-salaries.html b/approved/0050-fellowship-salaries.html index 48efb971..83ba44c6 100644 --- a/approved/0050-fellowship-salaries.html +++ b/approved/0050-fellowship-salaries.html @@ -90,7 +90,7 @@ diff --git a/approved/0056-one-transaction-per-notification.html b/approved/0056-one-transaction-per-notification.html index 1add6862..aaab4048 100644 --- a/approved/0056-one-transaction-per-notification.html +++ b/approved/0056-one-transaction-per-notification.html @@ -90,7 +90,7 @@ diff --git a/approved/0059-nodes-capabilities-discovery.html b/approved/0059-nodes-capabilities-discovery.html index 21f7c1d1..af912dbc 100644 --- a/approved/0059-nodes-capabilities-discovery.html +++ b/approved/0059-nodes-capabilities-discovery.html @@ -90,7 +90,7 @@ diff --git a/approved/0078-merkleized-metadata.html b/approved/0078-merkleized-metadata.html index 61555191..f780b8c4 100644 --- a/approved/0078-merkleized-metadata.html +++ b/approved/0078-merkleized-metadata.html @@ -90,7 +90,7 @@ diff --git a/approved/0084-general-transaction-extrinsic-format.html b/approved/0084-general-transaction-extrinsic-format.html index c776db9e..74cde627 100644 --- a/approved/0084-general-transaction-extrinsic-format.html +++ b/approved/0084-general-transaction-extrinsic-format.html @@ -90,7 +90,7 @@ diff --git a/approved/0091-dht-record-creation-time.html b/approved/0091-dht-record-creation-time.html index dcd5b68e..9d883c78 100644 --- a/approved/0091-dht-record-creation-time.html +++ b/approved/0091-dht-record-creation-time.html @@ -90,7 +90,7 @@ diff --git a/approved/0097-unbonding_queue.html b/approved/0097-unbonding_queue.html index 
a9ba0d5e..ea7d952a 100644 --- a/approved/0097-unbonding_queue.html +++ b/approved/0097-unbonding_queue.html @@ -90,7 +90,7 @@ diff --git a/approved/0099-transaction-extension-version.html b/approved/0099-transaction-extension-version.html index 9bc321f5..1e96d56d 100644 --- a/approved/0099-transaction-extension-version.html +++ b/approved/0099-transaction-extension-version.html @@ -90,7 +90,7 @@ diff --git a/approved/0101-xcm-transact-remove-max-weight-param.html b/approved/0101-xcm-transact-remove-max-weight-param.html index 97de51e1..1846934d 100644 --- a/approved/0101-xcm-transact-remove-max-weight-param.html +++ b/approved/0101-xcm-transact-remove-max-weight-param.html @@ -90,7 +90,7 @@ diff --git a/approved/0108-xcm-remove-testnet-ids.html b/approved/0108-xcm-remove-testnet-ids.html index 840cb646..62026fb0 100644 --- a/approved/0108-xcm-remove-testnet-ids.html +++ b/approved/0108-xcm-remove-testnet-ids.html @@ -90,7 +90,7 @@ diff --git a/index.html b/index.html index 84b9784a..90a66300 100644 --- a/index.html +++ b/index.html @@ -90,7 +90,7 @@ diff --git a/introduction.html b/introduction.html index 84b9784a..90a66300 100644 --- a/introduction.html +++ b/introduction.html @@ -90,7 +90,7 @@ diff --git a/print.html b/print.html index 5a67ae1a..4b67a851 100644 --- a/print.html +++ b/print.html @@ -91,7 +91,7 @@ @@ -377,83 +377,6 @@
None.
secp256r1_ecdsa_verify_prehashed: Host Function to verify NIST-P256 elliptic curve signatures
Start Date | 16 August 2024 |
Description | Host function to verify NIST-P256 elliptic curve signatures. |
Authors | Rodrigo Quelhas |
This RFC proposes a new host function, secp256r1_ecdsa_verify_prehashed, for verifying NIST-P256 signatures. The function takes as input the message hash, the r and s components of the signature, and the x and y coordinates of the public key. By providing this function, runtime authors can leverage a more efficient verification mechanism for "secp256r1" elliptic curve signatures, reducing computational costs and improving overall performance.
The “secp256r1” elliptic curve is standardized by NIST and uses the same underlying arithmetic as the “secp256k1” elliptic curve, but with different input parameters. The cost of combined attacks and the security conditions are almost the same for both curves. Adding a host function makes signature verification on the “secp256r1” curve available to the runtime, with multiple benefits. One important factor is that this curve is widely used and supported in many modern devices, such as Apple’s Secure Enclave, WebAuthn, and the Android Keychain, which demonstrates broad user adoption. Additionally, the introduction of this host function could enable valuable features in account abstraction, allowing more efficient and flexible management of accounts through transactions signed on mobile devices.
Most modern devices and applications rely on the “secp256r1” elliptic curve. The addition of this host function enables more efficient verification of device-native transaction signing mechanisms. For example:
This RFC proposes a new host function for runtime authors to leverage a more efficient verification mechanism for "secp256r1" elliptic curve signatures.
Proposed host function signature:
fn ext_secp256r1_ecdsa_verify_prehashed_version_1(
    sig: &[u8; 64],
    msg: &[u8; 32],
    pub_key: &[u8; 64],
) -> bool;
The host function MUST return true if the signature is valid or false otherwise.
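The pub_key parameter is the raw 64-byte concatenation of the x and y coordinates. As a hypothetical illustration (not part of the proposal), a caller holding a key in SEC1 uncompressed form — the 0x04 prefix followed by the x and y coordinates — could derive the expected input like this:

```rust
/// Convert a SEC1 uncompressed P-256 public key (0x04 || x || y, 65 bytes)
/// into the raw 64-byte x || y form expected by the proposed host function.
/// Returns None if the input is not a valid uncompressed SEC1 encoding.
fn sec1_to_raw(sec1: &[u8]) -> Option<[u8; 64]> {
    if sec1.len() != 65 || sec1[0] != 0x04 {
        return None;
    }
    let mut raw = [0u8; 64];
    raw.copy_from_slice(&sec1[1..]);
    Some(raw)
}

fn main() {
    // A dummy 65-byte SEC1 key: 0x04 prefix followed by x = [1; 32], y = [2; 32].
    let mut sec1 = vec![0x04u8];
    sec1.extend([1u8; 32]);
    sec1.extend([2u8; 32]);
    let raw = sec1_to_raw(&sec1).expect("valid SEC1 encoding");
    assert_eq!(&raw[..32], &[1u8; 32]); // x coordinate
    assert_eq!(&raw[32..], &[2u8; 32]); // y coordinate
}
```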
N/A
The changes do not directly affect protocol security; parachains are not forced to use the host function.
N/A
The host function proposed in this RFC allows parachain runtime developers to use a more efficient verification mechanism for "secp256r1" elliptic curve signatures.
Parachain teams will need to include this host function in order to upgrade.
A follow-up to RFC-0014. This RFC proposes adding a new collective to the Polkadot Collectives Chain, the Unbrick Collective, as well as improvements to the mechanisms that allow teams operating paras that have stopped producing blocks to be assisted in restoring block production.
Since the initial launch of Polkadot parachains, there have been many incidents causing parachains to stop producing new blocks (thereby becoming bricked) and many occurrences that required Polkadot governance to update the parachain head state/wasm. This can happen for many reasons.
As a consequence, the idea of an Unbrick Collective that can provide assistance to para teams when their chains brick, and further protection against future halts, is reasonable.
The Unbrick Collective is defined as an unranked collective of members, not paid by the Polkadot Treasury. Its main goal is to serve as a point of contact and assistance for enacting the actions
The ability to modify the Head State and/or the PVF of a para implies the possibility of performing arbitrary modifications to it (e.g. taking control of the native parachain token or any bridged assets in the para).
This could introduce a new attack vector, and such great power therefore needs to be handled carefully.
The implementation of this RFC will be tested on testnets (Rococo and Westend) first.
An audit will be required to ensure the implementation doesn't introduce unwanted side effects.
There are no privacy related concerns.
This RFC should not introduce any performance impact.
This RFC should improve the experience for new and existing parachain teams, lowering the barrier to unbricking a stalled para.
This RFC is fully compatible with existing interfaces.
The protocol change introduces flexibility in the governance structure by enabling the referenda track list to be modified dynamically at runtime. This is achieved by replacing the static slices in TracksInfo with iterators, facilitating storage-based track management. As a result, governance tracks can be modified or added based on real-time decisions and without requiring runtime upgrades.
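A minimal sketch of the slice-to-iterator idea; the names here (Track, TracksInfoLike, StorageTracks) are simplified assumptions, not the actual pallet-referenda types. Returning an iterator instead of a static slice lets the track list come from chain storage decided at runtime:

```rust
// Illustrative stand-in for a referenda track definition.
#[derive(Clone, Debug, PartialEq)]
struct Track {
    id: u16,
    name: &'static str,
    max_deciding: u32,
}

trait TracksInfoLike {
    // An iterator over tracks: a storage-backed impl can yield tracks decoded
    // from state, while a legacy impl can yield from a const slice.
    fn tracks(&self) -> Box<dyn Iterator<Item = Track> + '_>;
}

// Storage-backed implementation, simulated here with a Vec.
struct StorageTracks {
    stored: Vec<Track>,
}

impl TracksInfoLike for StorageTracks {
    fn tracks(&self) -> Box<dyn Iterator<Item = Track> + '_> {
        Box::new(self.stored.iter().cloned())
    }
}

fn main() {
    let mut info = StorageTracks {
        stored: vec![Track { id: 0, name: "root", max_deciding: 1 }],
    };
    // A governance decision adds a track at runtime -- no upgrade needed.
    info.stored.push(Track { id: 1, name: "whitelisted_caller", max_deciding: 100 });
    let ids: Vec<u16> = info.tracks().map(|t| t.id).collect();
    assert_eq!(ids, vec![0, 1]);
}
```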
Polkadot's governance system is designed to be adaptive and decentralized, but modifying the referenda tracks (which determine decision-making paths for proposals) has historically required runtime upgrades. This poses an operational challenge, delaying governance changes until an upgrade
The protocol modification replaces the current static slice method used for storing referenda tracks with an iterator-based approach that allows tracks to be managed dynamically using chain storage. Governance participants can define and modify referenda tracks as needed, which are then accessed
The most significant drawback is the increased complexity for developers managing track configurations via storage-based iterators, which require careful handling to avoid misuse or inefficiencies.
Additionally, this flexibility could introduce risks if track configurations are modified improperly during runtime, potentially leading to governance instabilities.
To ensure security, the change must be tested in testnet environments first (Paseo, Westend), particularly in scenarios where multiple track changes happen concurrently. Potential vulnerabilities in governance adjustments must be addressed to prevent abuse.
The proposal optimizes governance track management by avoiding the overhead of runtime upgrades, reducing downtime, and eliminating the need for full consensus on upgrades. However, there is a slight performance cost related to runtime access to storage-based iterators, though this is mitigated by the overall system efficiency gains.
Developers and governance actors benefit from simplified governance processes but must account for the technical complexity of managing iterator-based track configurations.
Tools may need to be developed to help streamline track adjustments at runtime.
The change is backward compatible with existing governance operations and does not require developers to adjust how they interact with referenda tracks.
A migration is required to convert existing statically-defined tracks to dynamic storage-based configurations without disruption.
This dynamic governance track approach builds on previous work around Polkadot's on-chain governance and leverages standard iterator patterns in Rust programming to improve runtime flexibility. Comparable solutions in other governance networks were examined, but this proposal uniquely tailors
The code of a runtime is stored in its own state, and when performing a runtime upgrade, this code is replaced. The new runtime can contain runtime migrations that adapt the state to the state layout as defined by the runtime code. This runtime migration is executed when building the first block with the new runtime code. Anything that interacts with the runtime state uses the state layout as defined by the runtime code. So, when trying to load something from the state in the block that applied the runtime upgrade, it will use the new state layout but will decode the data from the non-migrated state. In the worst case, the data is incorrectly decoded, which may lead to crashes or halting of the chain.
This RFC proposes to store the new runtime code under a different storage key when applying a runtime upgrade. This way, all the off-chain logic can still load the old runtime code under the default storage key and decode the state correctly. The block producer is then required to use this new runtime code to build the next block. While building the next block, the runtime is executing the migrations and moves the new runtime code to the default runtime code location. So, the runtime code found under the default location is always the correct one to decode the state from which the runtime code was loaded.
While the issue of having undecodable state only exists for the one block in which the runtime upgrade was applied, it still impacts anything that reads state data, like block explorers, UIs, nodes, etc. For block explorers, the issue mainly results in indexing invalid data, and UIs may show invalid data to the user. For nodes, reading incorrect data may lead to a performance degradation of the network. There are also ways to prevent certain decoding issues from happening, but these require that developers are aware of the issue and also require introducing extra code, which could introduce further bugs down the line.
So, this RFC tries to solve these issues by fixing the underlying problem of having temporarily undecodable state.
The runtime code is stored under the special key :code in the state. Nodes and other tooling read the runtime code under this storage key when they want to interact with the runtime, e.g., for building/importing blocks or getting the metadata to read the state. To update the runtime code, the runtime overwrites the value at :code, and from the next block on, the new runtime will be loaded.
This RFC proposes to first store the new runtime code under :pending_code in the state for one block. When the next block is being built, the block builder first needs to check if :pending_code is set, and if so, it needs to load the runtime from this storage key. While building the block, the runtime will move :pending_code to :code to have the runtime code at the default location. Nodes importing the block will also need to load :pending_code if it exists, to ensure that the correct runtime code is used. By doing it this way, the runtime code found at :code in the state of a block will always be able to decode that state.
Furthermore, this RFC proposes to introduce system_version: 3. The system_version was introduced in RFC42. Version 3 would then enable the usage of :pending_code when applying a runtime code upgrade. This way, the feature can be introduced first and enabled later, when the majority of nodes have upgraded.
Because the first block built with the new runtime code will move the runtime code from :pending_code to :code, the runtime code will need to be loaded. This means the runtime code will appear in the proof of validity of a parachain for the first block built with the new runtime code. Generally, this is not a problem, as the runtime code is also loaded by the parachain when setting the new runtime code.
There is still the possibility of having state that is not migrated, even when following the proposal as presented by this RFC. The issue is that if the amount of data to be migrated is too big, not all of it can be migrated in one block, because either it takes more time than is allotted for a block, or, for parachains, there is a fixed budget for the proof of validity. To solve this issue there already exist multi-block migrations that can chunk the migration across multiple blocks. Consensus-critical data needs to be migrated in the first block to ensure that block production etc. can continue. For the other data being migrated by multi-block migrations, the migrations could, for example, expose to the outside which keys are being migrated and should not be indexed until the migration is finished.
Testing should be straightforward, and most of the existing testing should already be good enough. It should be extended with some checks that :pending_code is moved to :code.
Performance should not be impacted, besides the need to load the runtime code in the first block built with the new runtime code.
It only alters the way blocks are produced and imported after applying a runtime upgrade. This means that only nodes need to be adapted to the changes of this RFC.
The change will require that nodes are upgraded before the runtime starts using this feature. Otherwise, they will fail to import the blocks built using :pending_code.
For Polkadot/Kusama, this means that the parachain nodes also need to be running with a relay chain node version that supports this new feature. Otherwise, the parachains will stop producing/finalizing blocks, as they can no longer sync the relay chain.
The issue initially reported a bug that led to this RFC. It also discusses multiple solutions for the problem.
None
This RFC proposes the definition of version 5 extrinsics along with changes to the specification and encoding from version 4.
RFC84 introduced the specification of General transactions, a new type of extrinsic besides the Signed and Unsigned variants available previously in version 4. Additionally,
The introduction of General transactions allows the authorization of any and all origins through extensions. This means that, with the appropriate extension, General transactions can replicate
The metadata will have to accommodate two distinct extrinsic format versions at a given point in time in order to provide the new functionality in a non-breaking way for users and tooling.
Although having to support multiple extrinsic versions in metadata involves extra work, the change is ultimately an improvement to metadata and the extra functionality may be useful in other future scenarios.
There is no impact on testing, security or privacy.
This change makes the authorization through signatures configurable by runtime devs in version 5 extrinsics, as opposed to version 4, where the signing payload algorithm and signatures were hardcoded. This moves the responsibility of ensuring proper authentication through TransactionExtension to the runtime devs, but a sensible default which closely resembles present-day behavior will be provided in VerifySignature.
There is no performance impact.
Tooling will have to adapt to be able to tell which authorization scheme is used by a particular transaction, by decoding the extension and checking which particular TransactionExtension in the pipeline is enabled to do the origin authorization. Previously, this was done by simply checking whether the transaction is signed or unsigned, as there was only one method of authentication.
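A rough sketch of the detection logic tooling would need, using simplified stand-in types (the real extrinsic and extension types differ; only the VerifySignature name comes from the text above):

```rust
// How tooling classifies the authorization scheme of a transaction.
#[derive(Debug, PartialEq)]
enum AuthScheme {
    None,
    Signature,
    Extension(&'static str),
}

// Version 4: the scheme is implied by the extrinsic variant.
enum ExtrinsicV4 {
    Unsigned,
    Signed, // signature checked by hardcoded logic
}

// Version 5 General transaction: the scheme is decided by the
// transaction-extension pipeline.
struct ExtrinsicV5General {
    extensions: Vec<&'static str>,
}

fn auth_scheme_v4(xt: &ExtrinsicV4) -> AuthScheme {
    match xt {
        ExtrinsicV4::Unsigned => AuthScheme::None,
        ExtrinsicV4::Signed => AuthScheme::Signature,
    }
}

fn auth_scheme_v5(xt: &ExtrinsicV5General) -> AuthScheme {
    // Tooling has to walk the pipeline to find the authorizing extension.
    for &ext in &xt.extensions {
        if ext == "VerifySignature" {
            return AuthScheme::Extension(ext);
        }
    }
    AuthScheme::None
}

fn main() {
    assert_eq!(auth_scheme_v4(&ExtrinsicV4::Signed), AuthScheme::Signature);
    let general = ExtrinsicV5General {
        extensions: vec!["CheckNonce", "VerifySignature", "ChargeTransactionPayment"],
    };
    assert_eq!(auth_scheme_v5(&general), AuthScheme::Extension("VerifySignature"));
}
```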
As long as extrinsic version 4 is still exposed in the metadata when version 5 will be introduced, the changes will not break existing infrastructure. This should give enough time for tooling to support version 5 and to remove version 4 in the future.
This is a result of the work in Extrinsic Horizon and RFC99.
This RFC proposes a metadata format for XCM-identifiable assets (i.e., for fungible/non-fungible collections and non-fungible tokens) and a set of instructions to communicate it across chains.
Currently, there is no way to communicate metadata of an asset (or an asset instance) via XCM.
The ability to query and modify the metadata is useful for two kinds of entities:
Besides metadata modification, the ability to read it is also valuable. On-chain logic can interpret the NFT metadata, i.e., the metadata could have not only the media meaning but also a utility function within a consensus system. Currently, such a way of using NFT metadata is possible only within one consensus system. This RFC proposes making it possible between different systems via XCM so different chains can fetch and analyze the asset metadata from other chains.
Runtime users, Runtime devs, Cross-chain dApps, Wallets.
The Asset Metadata is information bound to an asset class (fungible or NFT collection) or an asset instance (an NFT). The Asset Metadata could be represented differently on different chains (or in other consensus entities). However, to communicate metadata between consensus entities via XCM, we need a general format so that any consensus entity can make sense of such information.
Regarding ergonomics, no drawbacks were noticed.
As for the user experience, it could enable the discovery of new cross-chain use cases involving asset collections and NFTs, indicating a positive impact.
There are no security concerns, except for the ReportMetadata instruction, which implies that the source of the information must be trusted. In terms of performance and privacy, there will be no changes.
The implementations must honor the contract for the new instructions. Namely, if the instance field has the value AssetInstance::Undefined, the metadata must relate to the asset collection and not to a non-fungible token inside it.
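A minimal sketch of this contract, using simplified stand-ins for the XCM types (the Collection struct and resolve_metadata function are illustrative assumptions):

```rust
use std::collections::HashMap;

// Simplified stand-in for xcm::AssetInstance.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
enum AssetInstance {
    Undefined,
    Index(u128),
}

// Illustrative collection holding collection-level and per-NFT metadata.
struct Collection {
    collection_metadata: &'static str,
    instance_metadata: HashMap<AssetInstance, &'static str>,
}

fn resolve_metadata<'a>(col: &'a Collection, instance: &AssetInstance) -> Option<&'a str> {
    match instance {
        // Undefined refers to the collection itself, never a token inside it.
        AssetInstance::Undefined => Some(col.collection_metadata),
        other => col.instance_metadata.get(other).copied(),
    }
}

fn main() {
    let mut instance_metadata = HashMap::new();
    instance_metadata.insert(AssetInstance::Index(1), "nft #1 metadata");
    let col = Collection { collection_metadata: "collection metadata", instance_metadata };

    assert_eq!(resolve_metadata(&col, &AssetInstance::Undefined), Some("collection metadata"));
    assert_eq!(resolve_metadata(&col, &AssetInstance::Index(1)), Some("nft #1 metadata"));
    assert_eq!(resolve_metadata(&col, &AssetInstance::Index(2)), None);
}
```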
No significant impact.
Introducing a standard metadata format and a way of communicating it is a valuable addition to the XCM format that potentially increases cross-chain interoperability without the need to form ad-hoc chain-to-chain integrations via Transact.
This RFC proposes new functionality, so there are no compatibility issues.
Should the MetadataMap and MetadataKeys be bounded, or is it enough to rely on the fact that every XCM message is itself bounded?
This proposal introduces XCQ (Cross Consensus Query), which aims to serve as an intermediary layer between different chain runtime implementations and tools/UIs, to provide a unified interface for cross-chain queries.
XCQ abstracts away concrete implementations across chains and supports custom query computations. Use cases benefiting from XCQ include:
In Substrate, runtime APIs facilitate off-chain clients in reading the state of the consensus system.
However, different chains may expose different APIs for a similar query, or may use varying data types, such as applying custom transformations to raw data or using differing AccountId types.
This diversity also extends to the client side, which may require custom computations over runtime APIs in various use cases.
Therefore, tools and UI developers often access storage directly and reimplement custom computations to convert data into user-friendly representations, leading to duplicated code between Rust runtime logic and UI JS/TS logic.
This duplication increases workload and potential for bugs.
Therefore, a system is needed to serve as an intermediary layer between concrete chain runtime implementations and tools/UIs, to provide a unified interface for cross-chain queries.
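The intermediary-layer idea can be sketched as follows; the trait and its methods are assumptions for illustration, and in XCQ the shared computation would be a PolkaVM program rather than a native function:

```rust
// A unified interface that each chain adapts its concrete API to.
trait UnifiedQueryApi {
    // Chains may store balances differently; the adapter hides that.
    fn free_balance(&self, account: &str) -> u128;
    fn decimals(&self) -> u32;
}

// Two hypothetical chains with different token configurations.
struct ChainA;
impl UnifiedQueryApi for ChainA {
    fn free_balance(&self, _account: &str) -> u128 { 1_500_000_000_000 }
    fn decimals(&self) -> u32 { 12 }
}

struct ChainB;
impl UnifiedQueryApi for ChainB {
    fn free_balance(&self, _account: &str) -> u128 { 2_000_000_000 }
    fn decimals(&self) -> u32 { 9 }
}

// One custom computation, written once instead of once per chain per UI:
// a whole-unit balance for display.
fn display_balance(chain: &dyn UnifiedQueryApi, account: &str) -> u128 {
    chain.free_balance(account) / 10u128.pow(chain.decimals())
}

fn main() {
    // The same query logic runs unchanged against either chain.
    assert_eq!(display_balance(&ChainA, "alice"), 1);
    assert_eq!(display_balance(&ChainB, "alice"), 2);
}
```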
The overall query pattern of XCQ consists of three components:
BadOrigin
DestinationUnsupported
Testing:
It is new functionality and does not modify existing implementations.
The proposal benefits wallet and dApp developers. Developers no longer need to examine every concrete implementation to support conceptually similar operations across different chains. Additionally, they gain a more modular development experience by encapsulating custom computations over the exposed APIs in PolkaVM programs.
The proposal defines new APIs, which do not break compatibility with existing interfaces.
There are several discussions related to the proposal, including:
The Gray Paper suggests a design for applying the same audit protocol from Polkadot's parachain validation service to ETH rollups: "Smart-contract state may be held in a coherent format on the JAM chain so long as any updates are made through the 15kb/core/sec work results, which would need to contain only the hashes of the altered contracts’ state roots." This proposal concretely outlines a JAM service to do this for the two top non-Polkadot optimistic rollup platforms, OP Stack and ArbOS, as well as, ambitiously, Ethereum itself.
Optimistic rollups use centralized sequencers and have no forks, creating an illusion of fast finality while actually relying on delayed fraud proofs. Optimistic rollups are termed "optimistic" because they assume transactions are valid by default, requiring fraud proofs on Ethereum L1 if a dispute arises. Currently, ORUs store L2 data on ETH L1, using EIP-4844's blob transactions or similar DA alternatives, just long enough to allow for fraud proof submission. This approach, however, incurs a cost: a 7-day exit window to accommodate fraud proofs. JAM Service can reduce the dependence on this long exit window by validating L2 optimistic rollups as well as the L1.
JAM is intended to host rollups rather than serve end users directly.
A JAM service to validate optimistic rollups and Ethereum will expand JAM's service scope and enhance their appeal with JAM's high-throughput capabilities for both DA and computational resources.
Increasing the total addressable market for rollups to include non-Polkadot rollups will increase CoreTime demand, making JAM attractive to both existing and new optimistic rollups with higher cross-validation.
Instead of ETH, rollups would require DOT for CoreTime to secure their rollup. However, rollups are not locked into JAM and may freely enter and exit the JAM ecosystem, since work packages do not need to start at genesis.
Different rollups may need to scale their core usage based on rollup activity. JAM's connectivity to CoreTime is expected to handle this effectively.
Currently, preimages are specified to use the Blake2b hash, while Ethereum rollup block hashes utilize Keccak256. This is an application level concern trivially solved by the preimage provider responding to preimage announcements by Blake2b hash instead of Keccak256.
The described service requires review by security experts familiar with JAM, ELVES, and Ethereum.
The ELVES and JAM protocols are expected to undergo audit with the 1.0 ratification of JAM.
It is believed that the use of revm is safe due to its extensive coverage of the Ethereum State and Block tests, but this may require careful review.
This RFC proposes a periodic, sale-based method for assigning Polkadot Coretime, the analogue of "block space" within the Polkadot Network. The method takes into account the need for long-term capital expenditure planning for teams building on Polkadot, yet also provides a means for Polkadot to capture long-term value in the resource that it sells. It supports the possibility of building rich and dynamic secondary markets to optimize resource allocation and largely avoids the need for parameterization.
The Polkadot Ubiquitous Computer, or just Polkadot UC, represents the public service provided by the Polkadot Network. It is a trust-free, WebAssembly-based, multicore, internet-native omnipresent virtual machine which is highly resilient to interference and corruption.
The present system of allocating the limited resources of the Polkadot Ubiquitous Computer is through a process known as parachain slot auctions. This is a parachain-centric paradigm whereby a single core is long-term allocated to a single parachain which itself implies a Substrate/Cumulus-based chain secured and connected via the Relay-chain. Slot auctions are on-chain candle auctions which proceed for several days and result in the core being assigned to the parachain for six months at a time up to 24 months in advance. Practically speaking, we only see two year periods being bid upon and leased.
Furthermore, the design SHOULD be implementable and deployable in a timely fashion; three months from the acceptance of this RFC should not be unreasonable.
Primary stakeholder sets are:
Socialization:
The essentials of this proposal were presented at Polkadot Decoded 2023 Copenhagen on the Main Stage. A small amount of socialization at the Parachain Summit preceded it and some substantial discussion followed it. The Parity Ecosystem team is currently soliciting views from ecosystem teams who would be key stakeholders.
Upon implementation of this proposal, the parachain-centric slot auctions and associated crowdloans cease. Instead, Coretime on the Polkadot UC is sold by the Polkadot System in two separate formats: Bulk Coretime and Instantaneous Coretime.
When a Polkadot Core is utilized, we say it is dedicated to a Task rather than a "parachain". The Task to which a Core is dedicated may change at every Relay-chain block and while one predominant type of Task is to secure a Cumulus-based blockchain (i.e. a parachain), other types of Tasks are envisioned.
No specific considerations.
Parachains already deployed into the Polkadot UC must have a clear plan of action to migrate to an agile Coretime market.
While this proposal does not introduce documentable features per se, adequate documentation must be provided to potential purchasers of Polkadot Coretime. This SHOULD include any alterations to the Polkadot-SDK software collection.
Regular testing through unit tests, integration tests, manual testnet tests, Zombienet tests and fuzzing SHOULD be conducted.
A regular security review SHOULD be conducted prior to deployment through a review by the Web3 Foundation economic research group.
Any final implementation MUST pass a professional external security audit.
Robert Habermeier initially wrote on the subject of a blockspace-centric Polkadot in the article Polkadot Blockspace over Blockchains. While not going into details, the article served as an early reframing piece for moving beyond one-slot-per-chain models and building out secondary market infrastructure for resource allocation.
Table of Contents
In the Agile Coretime model of the Polkadot Ubiquitous Computer, as proposed in RFC-1 and RFC-3, it is necessary for the allocating parachain (envisioned to be one or more pallets on a specialised Brokerage System Chain) to communicate the core assignments to the Relay-chain, which is responsible for ensuring those assignments are properly enacted.
This is a proposal for the interface which will exist around the Relay-chain in order to communicate this information and instructions.
The background motivation for this interface is splitting out coretime allocation functions and secondary markets from the Relay-chain onto System parachains. A well-understood and general interface is necessary for ensuring the Relay-chain receives coretime allocation instructions from one or more System chains without introducing dependencies on the implementation details of either side.
Primary stakeholder sets are:
Socialization:
The content of this RFC was discussed in the Polkadot Fellows channel.
The interface has two sections: The messages which the Relay-chain is able to receive from the allocating parachain (the UMP message types), and messages which the Relay-chain is able to send to the allocating parachain (the DMP message types). These messages are expected to be able to be implemented in a well-known pallet and called with the XCM Transact instruction.
Future work may include these messages being introduced into the XCM standard.
Realistic Limits of the Usage
For request_revenue_info, a successful request should be possible if when is no less than the Relay-chain block number on arrival of the message, less 100,000.
For assign_core, a successful request should be possible if begin is no less than the Relay-chain block number on arrival of the message plus 10, and workload contains no more than 100 items.
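The two limits above can be captured as simple predicates. The sketch below uses assumed names and plain types for illustration; the actual pallet interface differs:

```rust
/// Illustrative validity checks for the two Coretime UMP messages, using the
/// limits stated above. Names and types here are assumptions, not the real API.
type BlockNumber = u32;

/// `request_revenue_info(when)` is honoured if `when` is at most 100,000
/// blocks in the past at the time the message arrives.
fn revenue_request_in_limits(when: BlockNumber, arrival: BlockNumber) -> bool {
    when >= arrival.saturating_sub(100_000)
}

/// `assign_core(begin, workload)` is honoured if `begin` is at least 10 blocks
/// after arrival and the workload contains at most 100 items.
fn assign_core_in_limits(begin: BlockNumber, arrival: BlockNumber, workload_len: usize) -> bool {
    begin >= arrival.saturating_add(10) && workload_len <= 100
}
```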
Performance, Ergonomics and Compatibility
No specific considerations.
Testing, Security and Privacy
Standard Polkadot testing and security auditing applies.
The proposal introduces no new privacy concerns.
Future Directions and Related Material
Drawbacks, Alternatives and Unknowns
None at present.
Prior Art and References
None.
Summary
As core functionality moves from the Relay Chain into system chains, so increases the reliance on
the liveness of these chains for the use of the network. It is not economically scalable, nor
necessary from a game-theoretic perspective, to pay collators large rewards. This RFC proposes a
mechanism -- part technical and part social -- for ensuring reliable collator sets that are
resilient to attempts to stop any subsystem of the Polkadot protocol.
Motivation
In order to guarantee access to Polkadot's system, the collators on its system chains must propose
blocks (provide liveness) and allow all transactions to eventually be included. That is, some
collators may censor transactions, but there must exist one collator in the set who will include a
Requirements
Collators selected by governance SHOULD have a reasonable expectation that the Treasury will
reimburse their operating costs.
This protocol builds on the existing Collator Selection pallet and its notion of Invulnerables. Invulnerables are collators (identified by their AccountIds) who
The primary drawback is a reliance on governance for continued treasury funding of infrastructure costs for Invulnerable collators.
The vast majority of cases can be covered by unit testing. Integration tests should ensure that the Collator Selection UpdateOrigin, which has permission to modify the Invulnerables and desired number of Candidates, can handle updates over XCM from the system's governance location.
This proposal has very little impact on most users of Polkadot, and should improve the performance of system chains by reducing the number of missed blocks.
As chains have strict PoV size limits, care must be taken regarding the PoV impact of the session manager. Appropriate benchmarking and tests should ensure that conservative limits are placed on the number of Invulnerables and Candidates.
The primary group affected is Candidate collators, who, after implementation of this RFC, will need to compete in a bond-based election rather than a race to claim a Candidate spot.
This RFC is compatible with the existing implementation and can be handled via upgrades and migration.
The full nodes of the Polkadot peer-to-peer network maintain a distributed hash table (DHT), which is currently used for full-node and validator discovery purposes.
This RFC proposes to extend this DHT to be used to discover full nodes of the parachains of Polkadot.
The maintenance of bootnodes has long been an annoyance for everyone.
When a bootnode is newly deployed or removed, every chain specification must be updated in order to take the update into account. This has led to various non-optimal solutions, such as pulling chain specifications from GitHub repositories. When it comes to RPC nodes, UX developers often have trouble finding up-to-date addresses of parachain RPC nodes. With the ongoing migration from RPC nodes to light clients, similar problems would happen with chain specifications as well.
While this RFC doesn't solve these problems for relay chains, it aims at solving them for parachains by storing the list of all the full nodes of a parachain on the relay chain DHT.
Assuming that this RFC is implemented, and that light clients are used, deploying a parachain wouldn't require more work than registering it onto the relay chain and starting the collators. There wouldn't be any need for special infrastructure nodes anymore.
This RFC has been opened on my own initiative because I think that this is a good technical solution to a usability problem that many people are encountering and that they don't realize can be solved.
The content of this RFC only applies to parachains and parachain nodes that are "Substrate-compatible". It is in no way mandatory for parachains to comply with this RFC.
Note that "Substrate-compatible" is very loosely defined as "implements the same mechanisms and networking protocols as Substrate". The author of this RFC believes that "Substrate-compatible" should be very precisely specified, but there is controversy on this topic.
While a lot of this RFC concerns the implementation of parachain nodes, it makes use of the resources of the Polkadot chain, and as such it is important to describe them in the Polkadot specification.
The peer_id and addrs fields are in theory not strictly needed, as the PeerId and addresses could always be equal to the PeerId and addresses of the node being registered as the provider and serving the response. However, the Cumulus implementation currently uses two different networking stacks, one for the parachain and one for the relay chain, using two separate PeerIds and addresses, and as such the PeerId and addresses of the other networking stack must be indicated. Asking them to use only one networking stack wouldn't be feasible in a realistic time frame.
The values of the genesis_hash and fork_id fields cannot be verified by the requester and are expected to be unused at the moment. Instead, a client that desires connecting to a parachain is expected to obtain the genesis hash and fork ID of the parachain from the parachain chain specification. These fields are included in the networking protocol nonetheless in case an acceptable solution is found in the future, and in order to allow use cases such as discovering parachains in a not-strictly-trusted way.
Because not all nodes want to be used as bootnodes, implementers are encouraged to provide a way to disable this mechanism. However, it is very much encouraged to leave this mechanism on by default for all parachain nodes.
This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms. However, if the principle of chain specification bootnodes is entirely replaced with the mechanism described in this RFC (which is the objective), then it becomes important whether the mechanism in this RFC can be abused in order to make a parachain unreachable.
The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.
Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.
Assuming 1000 parachain full nodes, the 20 Polkadot full nodes corresponding to a specific parachain will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a parachain full node registers itself as the provider of the key corresponding to BabeApi_next_epoch.
Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that desire knowing the bootnodes of a parachain. Light clients are generally encouraged to cache the peers that they use between restarts, so they should only query these 20 Polkadot full nodes at their first initialization. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.
Irrelevant.
Irrelevant.
None.
While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?
The Polkadot UC will generate revenue from the sale of available Coretime. The question then arises: how should we handle these revenues? Broadly, there are two reasonable paths: burning the revenue and thereby removing it from total issuance, or diverting it to the Treasury. This Request for Comment (RFC) presents arguments favoring burning as the preferred mechanism for handling revenues from Coretime sales.
How to handle the revenue accrued from Coretime sales is an important economic question that influences the value of DOT and should be properly discussed before deciding for either of the options. Now is the best time to start this discussion.
Polkadot DOT token holders.
This RFC discusses the potential benefits of burning the revenue accrued from Coretime sales instead of diverting it to the Treasury. The arguments for it follow.
It's in the interest of the Polkadot community to have a consistent and predictable Treasury income, because volatility in the inflow can be damaging, especially in situations when it is insufficient. As such, this RFC operates under the presumption of a steady and sustainable Treasury income flow, which is crucial for the Polkadot community's stability. The assurance of a predictable Treasury income, as outlined in a prior discussion here, or through other equally effective measures, serves as a baseline assumption for this argument.
Consequently, we need not concern ourselves with this particular issue here. This naturally begs the question - why should we introduce additional volatility to the Treasury by aligning it with the variable Coretime sales? It's worth noting that Coretime revenues often exhibit an inverse relationship with periods when Treasury spending should ideally be ramped up. During periods of low Coretime utilization (indicated by lower revenue), Treasury should spend more on projects and endeavours to increase the demand for Coretime. This pattern underscores that Coretime sales, by their very nature, are an inconsistent and unpredictable source of funding for the Treasury. Given the importance of maintaining a steady and predictable inflow, it's unnecessary to rely on another volatile mechanism. Some might argue that we could have both: a steady inflow (from inflation) and some added bonus from Coretime sales, but burning the revenue would offer further benefits as described below.
Since the introduction of the Collectives parachain, many groups have expressed interest in forming new -- or migrating existing groups into -- on-chain collectives. While adding a new collective is relatively simple from a technical standpoint, the Fellowship will need to merge new pallets into the Collectives parachain for each new collective. This RFC proposes a means for the network to ratify a new collective, thus instructing the Fellowship to instate it in the runtime.
Many groups have expressed interest in representing collectives on-chain. Some of these include:
The group that wishes to operate an on-chain collective should publish the following information:
Collective removal may also come with other governance calls, for example voiding any scheduled Treasury spends that would fund the given collective.
Passing a Root origin referendum is slow. However, given the network's investment (in terms of code maintenance and salaries) in a new collective, this is an appropriate step.
No impacts.
Generally, all new collectives will be in the Collectives parachain. Thus, performance impacts should strictly be limited to this parachain and not affect others. As the majority of logic for collectives is generalized and reusable, we expect most collectives to be instances of similar subsets of modules. That is, new collectives should generally be compatible with UIs and other services that provide collective-related functionality, with few modifications to support new ones.
The launch of the Technical Fellowship, see the initial forum post.
Introduces breaking changes to the Core runtime API by letting Core::initialize_block return an enum. The version of Core is bumped from 4 to 5.
The main feature that motivates this RFC is Multi-Block-Migrations (MBMs); these make it possible to split a migration over multiple blocks.
Further, it would be nice not to hinder the possibility of implementing a new hook, poll, that runs at the beginning of the block when there are no MBMs and has access to AllPalletsWithSystem. This hook can then be used to replace the use of on_initialize and on_finalize for non-deadline-critical logic.
In a similar fashion, it should not hinder the future addition of a System::PostInherents callback that always runs after all inherents were applied.
Core::initialize_block
This runtime API function is changed from returning () to ExtrinsicInclusionMode:
fn initialize_block(header: &<Block as BlockT>::Header) -> ExtrinsicInclusionMode;
1. Multi-Block-Migrations: The runtime is being put into lock-down mode for the duration of the migration process by returning OnlyInherents from initialize_block. This ensures that no user-provided transaction can interfere with the migration process. It is absolutely necessary to ensure this, otherwise a transaction could call into un-migrated storage and violate storage invariants.
2. poll is possible by using apply_extrinsic as entry-point and is not hindered by this approach. It would not be possible to use a pallet inherent like System::last_inherent to achieve this, for two reasons: first, pallets do not have access to AllPalletsWithSystem, which is required to invoke the poll hook on all pallets; second, the runtime does not currently enforce an order of inherents.
3. System::PostInherents can be done in the same manner as poll.
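To illustrate the lock-down behaviour described above: a block-builder that honours the new return value skips user transactions whenever the runtime signals OnlyInherents. The following is a minimal sketch with assumed types, not the actual Substrate block-builder:

```rust
/// Sketch of how a block-builder could react to the new return value of
/// `Core::initialize_block`. Types and names here are assumptions.
#[derive(PartialEq)]
enum ExtrinsicInclusionMode {
    /// All extrinsics are allowed (the pre-RFC behaviour).
    AllExtrinsics,
    /// Only inherents may be applied, e.g. while an MBM is in progress.
    OnlyInherents,
}

enum Extrinsic {
    Inherent(&'static str),
    Transaction(&'static str),
}

/// Select which extrinsics the block-builder applies in this block:
/// inherents always go in; transactions only when the mode allows them.
fn select_extrinsics(mode: ExtrinsicInclusionMode, pool: Vec<Extrinsic>) -> Vec<&'static str> {
    pool.into_iter()
        .filter_map(|xt| match xt {
            Extrinsic::Inherent(name) => Some(name),
            Extrinsic::Transaction(name) => {
                (mode == ExtrinsicInclusionMode::AllExtrinsics).then_some(name)
            }
        })
        .collect()
}
```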
Drawbacks
The previous drawback of cementing the order of inherents has been addressed and removed by redesigning the approach. No further drawbacks have been identified thus far.
Testing, Security, and Privacy
The new logic of initialize_block can be tested by checking that the block-builder will skip transactions when OnlyInherents is returned.
Security: n/a
Privacy: n/a
Performance, Ergonomics, and Compatibility
Performance
The performance overhead is minimal in the sense that no clutter was added after fulfilling the requirements. The only performance difference is that initialize_block also returns an enum that needs to be passed through the WASM boundary. This should be negligible.
Ergonomics
The new interface allows for more extensible runtime logic. In the future, this will be utilized for multi-block-migrations, which should be a huge ergonomic advantage for parachain developers.
Compatibility
The advice here is OPTIONAL and outside of the RFC. To not degrade user experience, it is recommended to ensure that an updated node can still import historic blocks.
Prior Art and References
The RFC is currently being implemented in polkadot-sdk#1781 (formerly substrate#14275). Related issues and merge requests:
Authors: Bryan Chen
Summary
This RFC proposes a set of changes to the parachain lock mechanism. The goal is to allow a parachain manager to self-service the parachain without root track governance action.
This is achieved by removing existing lock conditions and only locking a parachain when:
- A parachain manager explicitly locks the parachain
- OR a parachain block is produced successfully
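The lock rules above can be modelled as a tiny state machine. This is illustrative only; the names are assumptions and not the paras_registrar implementation:

```rust
/// Toy model of the proposed lock rules: a parachain starts unlocked; it
/// becomes locked either when the manager locks it explicitly or when it
/// produces its first block.
struct Parachain {
    locked: bool,
    blocks_produced: u64,
}

impl Parachain {
    fn onboard() -> Self {
        Parachain { locked: false, blocks_produced: 0 }
    }

    fn manager_lock(&mut self) {
        self.locked = true;
    }

    fn on_block_produced(&mut self) {
        self.blocks_produced += 1;
        // Producing the first block locks the parachain.
        if self.blocks_produced == 1 {
            self.locked = true;
        }
    }

    /// The manager may manage (e.g. swap wasm/genesis) only while unlocked.
    fn manager_can_manage(&self) -> bool {
        !self.locked
    }
}
```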
Motivation
The manager of a parachain has permission to manage the parachain when the parachain is unlocked. Parachains are by default locked when onboarded to a slot. This requires that the parachain wasm/genesis be valid; otherwise, a root track governance action on the relaychain is required to update the parachain.
The current reliance on root track governance actions for managing parachains can be time-consuming and burdensome. This RFC aims to address this technical difficulty by allowing parachain managers to take self-service actions, rather than relying on general public voting.
The key scenarios this RFC seeks to improve are:
Requirements
A parachain SHOULD be locked when it successfully produces its first block.
A parachain can either be locked or unlocked. With the parachain locked, the parachain manager does not have any privileges. With the parachain unlocked, the parachain manager can perform the following actions with the paras_registrar pallet:
Parachain locks are designed in such a way to ensure the decentralization of parachains. If parachains are not locked when they should be, it could introduce centralization risk for new parachains.
For example, one possible scenario is that a collective may decide to launch a parachain fully decentralized. However, if the parachain is unable to produce a block, the parachain manager will be able to replace the wasm and genesis without the consent of the collective.
This risk is considered tolerable, as it requires the wasm/genesis to be invalid in the first place. It is not yet practically possible to develop a parachain without any centralization risk currently.
Another case is that a parachain team may decide to use a crowdloan to help secure a slot lease. Previously, creating a crowdloan would lock a parachain. This means crowdloan participants will know exactly the genesis of the parachain for the crowdloan they are participating in. However, this actually provides little assurance to crowdloan participants. For example, if the genesis block is determined before a crowdloan is started, it is not possible to have an on-chain mechanism to enforce reward distributions for crowdloan participants. They always have to rely on the parachain team to fulfill the promise after the parachain is live.
Existing operational parachains will not be impacted.
The implementation of this RFC will be tested on testnets (Rococo and Westend) first.
An audit may be required to ensure the implementation does not introduce unwanted side effects.
There are no privacy-related concerns.
This RFC should not introduce any performance impact.
This RFC should improve the developer experience for new and existing parachain teams.
This RFC is fully compatible with existing interfaces.
Encointer has been a system chain on Kusama since January 2022 and has been developed and maintained by the Encointer association. This RFC proposes to treat Encointer like any other system chain and include it in the fellowship repo with this PR.
Encointer does not seek to be in control of its runtime repository. As a decentralized system, the fellowship has a more suitable structure to maintain a system chain runtime repo than the Encointer association does.
Also, Encointer aims to update its runtime in batches with other system chains in order to have consistency for interoperability across system chains.
Our PR has all details about our runtime and how we would move it into the fellowship repo.
Noteworthy: All Encointer-specific pallets will still be located in encointer's repo for the time being: https://github.com/encointer/pallets
It will still be the duty of the Encointer team to keep its runtime up to date and provide adequate test fixtures. Frequent dependency bumps with Polkadot releases would be beneficial for interoperability and could be streamlined with other system chains, but that will not be a duty of the fellowship. Whenever possible, all system chains could be upgraded jointly (including Encointer) with a batch referendum.
Unlike all other system chains, development and maintenance of the Encointer Network is mainly financed by the KSM Treasury and possibly the DOT Treasury in the future. Encointer is dedicated to maintaining its network and runtime code for as long as possible, but there is a dependency on funding which is not in the hands of the fellowship. The only risk in the context of funding, however, is that the Encointer runtime will see less frequent updates if there's less funding.
No changes to the existing system are proposed. Only changes to how maintenance is organized.
No changes
Existing Encointer runtime repo
None identified
The Relay Chain contains most of the core logic for the Polkadot network. While this was necessary prior to the launch of parachains and development of XCM, most of this logic can exist in parachains. This is a proposal to migrate several subsystems into system parachains.
Polkadot's scaling approach allows many distinct state machines (known generally as parachains) to operate with common guarantees about the validity and security of their state transitions. Polkadot provides these common guarantees by executing the state transitions on a strict subset (a backing group) of its validators.
The following pallets and subsystems are good candidates to migrate from the Relay Chain:
These subsystems will have fewer resources on cores than on the Relay Chain. Staking in particular may require some optimizations to deal with constraints.
Standard audit/review requirements apply. More powerful multi-chain integration test tools would be useful in development.
This is an optimization. The removal of public/user transactions on the Relay Chain ensures that its primary resources are allocated to system performance.
This proposal alters very little for coretime users (e.g. parachain developers). Application developers will need to interact with multiple chains, making ergonomic light client tools particularly important for application development.
For existing parachains that interact with these subsystems, they will need to configure their runtimes to recognize the new locations in the network.
Implementing this proposal will require some changes to pallet APIs and/or a pub-sub protocol. Application developers will need to interact with multiple chains in the network.
At the moment, we have the state_version field on RuntimeVersion that derives which state version is used for the storage. We have a use case where we want the extrinsics root to be derived using StateVersion::V1. Without defining a new field under RuntimeVersion, we would like to propose adding system_version, which can be used to derive both the storage and extrinsic state version.
Since the extrinsic state version is always StateVersion::V0, deriving the extrinsic root requires full extrinsic data. This would be problematic when we need to verify the extrinsics root if the extrinsic sizes are bigger. This problem is further explored in https://github.com/polkadot-fellows/RFCs/issues/19.
In order to use a project-specific StateVersion for extrinsic roots, we proposed an implementation that introduced a parameter to frame_system::Config, but that unfortunately did not feel correct.
There should be no drawbacks, as it would replace state_version with the same behavior, but documentation should be updated so that chains know which system_version to use.
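To make the intended semantics concrete, the sketch below models one possible mapping from a single system_version value to the two derived state versions. The threshold values are assumptions for illustration; the proposal only requires that both versions be derivable from the one field:

```rust
/// Illustrative mapping from a single `system_version` value to the storage
/// and extrinsic state versions. Thresholds here are assumed for the sketch.
#[derive(Clone, Copy, PartialEq, Debug)]
enum StateVersion {
    V0,
    V1,
}

struct SystemVersion(u8);

impl SystemVersion {
    /// Storage state version: V1 from version 1 onwards (assumed).
    fn storage_state_version(&self) -> StateVersion {
        if self.0 >= 1 { StateVersion::V1 } else { StateVersion::V0 }
    }

    /// Extrinsic state version: historically always V0; with a high enough
    /// `system_version` (assumed threshold 2) the extrinsics root is also
    /// derived with V1.
    fn extrinsic_state_version(&self) -> StateVersion {
        if self.0 >= 2 { StateVersion::V1 } else { StateVersion::V0 }
    }
}
```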
As far as I know, this should not have any impact on security or privacy.
These changes should be compatible for existing chains if they use the state_version value for system_version.
I do not believe there is any performance hit with this change.
This does not break any exposed APIs.
This change should not break any compatibility.
We proposed introducing a similar change by introducing a parameter to frame_system::Config but did not feel that was the correct way to introduce this change.
This RFC proposes a new host function for parachains, storage_proof_size. It shall provide the size of the currently recorded storage proof to the runtime. Runtime authors can use the proof size to improve block utilization by retroactively reclaiming unused storage weight.
The number of extrinsics that are included in a parachain block is limited by two constraints: execution time and proof size. FRAME weights cover both concepts, and block-builders use them to decide how many extrinsics to include in a block. However, these weights are calculated ahead of time by benchmarking on a machine with reference hardware. The execution-time properties of the state-trie and its storage items are unknown at benchmarking time. Therefore, we make some assumptions about the state-trie:
In addition, the current model does not account for multiple accesses to the same storage items. While these repetitive accesses will not increase storage-proof size, the runtime-side weight monitoring will account for them multiple times. Since the proof size is completely opaque to the runtime, we can not implement retroactive storage weight correction.
A solution must provide a way for the runtime to track the exact storage-proof size consumed on a per-extrinsic basis.
This RFC proposes a new host function that exposes the storage-proof size to the runtime. As a result, runtimes can implement storage weight reclaiming mechanisms that improve block utilization.
This RFC proposes the following host function signature:
fn ext_storage_proof_size_version_1() -> u64;
Explanation
The host function MUST return an unsigned 64-bit integer value representing the current proof size. In block-execution and block-import contexts, this function MUST return the current size of the proof. To achieve this, parachain node implementors need to enable proof recording for block imports. In other contexts, this function MUST return 18446744073709551615 (u64::MAX), which represents disabled proof recording.
Parachain nodes need to enable proof recording during block import to correctly implement the proposed host function. Benchmarking conducted with balance transfers has shown a performance reduction of around 0.6% when proof recording is enabled.
The host function proposed in this RFC allows parachain runtime developers to keep track of the proof size. Typical usage patterns would be to keep track of the overall proof size or the difference between subsequent calls to the host function.
Parachain teams will need to include this host function to upgrade.
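A typical reclaiming pattern, tracking the difference between subsequent calls to the host function, might look like the following sketch. The constant and function names are stand-ins; a real runtime would call ext_storage_proof_size_version_1 through the host interface:

```rust
/// Sentinel returned when proof recording is disabled, per the host function
/// specification above.
const PROOF_RECORDING_DISABLED: u64 = u64::MAX;

/// Given the proof size reported before and after an extrinsic, and the
/// proof-size weight that was charged up front from the benchmark, return how
/// much weight can be refunded (illustrative helper, assumed name).
fn reclaimable(before: u64, after: u64, charged: u64) -> u64 {
    if before == PROOF_RECORDING_DISABLED || after == PROOF_RECORDING_DISABLED {
        // Proof recording disabled: nothing can be measured, so nothing
        // is reclaimed.
        return 0;
    }
    let consumed = after.saturating_sub(before);
    charged.saturating_sub(consumed)
}
```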
This RFC proposes changing the current deposit requirements on the Polkadot and Kusama Asset Hub for creating an NFT collection, minting an individual NFT, and lowering its corresponding metadata and attribute deposits. The objective is to lower the barrier to entry for NFT creators, fostering a more inclusive and vibrant ecosystem while maintaining network integrity and preventing spam.
The current deposit of 10 DOT for collection creation (along with 0.01 DOT for item deposit and 0.2 DOT for metadata and attribute deposits) on the Polkadot Asset Hub and 0.1 KSM on the Kusama Asset Hub presents a significant financial barrier for many NFT creators. The new deposit amounts would be determined by the deposit function, adjusted by a corresponding pricing mechanism.
Previous discussions have been held within the Polkadot Forum, with artists expressing their concerns about the deposit amounts.
This RFC proposes a revision of the deposit constants in the configuration of the NFTs pallet on the Polkadot Asset Hub. The new deposit amounts would be determined by a standard deposit formula.
As of v1.1.1, the Collection Deposit is 10 DOT and the Item Deposit is 0.01 DOT.
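For reference, system-parachain deposits are commonly computed by a per-item/per-byte helper of the following shape. The rates below are assumed values for illustration only, not the actual Asset Hub constants:

```rust
/// Illustrative deposit helper in the common system-parachain shape:
/// a per-item rate plus a per-byte rate. Rates are assumptions.
type Balance = u128;

const UNITS: Balance = 10_000_000_000; // 1 DOT in plancks
const CENTS: Balance = UNITS / 100;

/// deposit(items, bytes) = items * per-item rate + bytes * per-byte rate.
/// Assumed rates: 20 cents per item, 1 cent per 100 bytes.
const fn deposit(items: u32, bytes: u32) -> Balance {
    items as Balance * 20 * CENTS + bytes as Balance * CENTS / 100
}
```

Under such a formula, lowering a deposit is a matter of tuning the constants (or scaling the result) rather than changing the mechanism.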
Modifying deposit requirements necessitates a balanced assessment of the potential drawbacks. Highlighted below are cogent points extracted from the discourse on the Polkadot Forum conversation.
As noted above, state bloat is a security concern. In the case of abuse, governance could adapt by increasing deposit rates and/or using forceDestroy on collections agreed to be spam.
The primary performance consideration stems from the potential for state bloat due to increased activity from lower deposit requirements. It's vital to monitor and manage this to avoid any negative impact on the chain's performance. Strategies for mitigating state bloat, including efficient data management and periodic reviews of storage requirements, will be essential.
The proposed change aims to enhance the user experience for artists, traders, and utilizers of the Kusama and Polkadot Asset Hubs, making Polkadot and Kusama more accessible and user-friendly.
The change does not impact compatibility, as a redeposit function is already implemented.
If this RFC is accepted, there should not be any unresolved questions regarding how to adapt the deposits.
Propose a way of permuting the availability chunk indices assigned to validators, in the context of recovering available data from systematic chunks, with the purpose of fairly distributing network bandwidth usage.
Currently, the ValidatorIndex is always identical to the ChunkIndex. Since the validator array is only shuffled once per session, naively using the ValidatorIndex as the ChunkIndex would place unreasonable stress on the first N/3 validators during an entire session, when favouring availability recovery from systematic chunks.
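One simple way to permute chunk indices in the spirit of this proposal (illustrative only; not necessarily the exact mapping adopted) is to rotate each validator's chunk index by the core index, so that the systematic chunks (the first N/3 indices) land on different validators for different cores:

```rust
/// Illustrative chunk-index permutation: rotate by core index so the
/// systematic chunk indices are spread across validators per core.
fn chunk_index(validator_index: u32, core_index: u32, n_validators: u32) -> u32 {
    (validator_index + core_index) % n_validators
}
```

With this rotation, validator 0 holds a systematic chunk for some cores but not others, evening out bandwidth usage over a session.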
Relay chain node core developers.
An erasure coding algorithm is considered systematic if it preserves the original unencoded data as part of the resulting code.
core_index
that used to be occupied by a candidate in some parts of the dispute protocol is
very complicated (See appendix A). This RFC assumes that availability-recovery processes initiated during
CandidateReceipt
Extensive testing will be conducted - both automated and manual. This proposal doesn't affect security or privacy.
This is a necessary data availability optimisation, as Reed-Solomon erasure coding has proven to be a top consumer of CPU time in Polkadot as we scale up the parachain block size and number of availability cores.
With this optimisation, preliminary performance results show that CPU time used for Reed-Solomon coding/decoding can be halved and total PoV recovery time decreased by 80% for large PoVs. See more here.
Not applicable.
This is a breaking change. See upgrade path section above. All validators and collators need to have upgraded their node versions before the feature will be enabled via a governance call.
See comments on the tracking issue and the in-progress PR
This RFC proposes to change the SessionKeys::generate_session_keys
runtime api interface. This runtime api is used by validator operators to
generate new session keys on a node. The public session keys are then registered manually on chain by the validator operator.
Before this RFC it was not possible by the on chain logic to ensure that the account setting the public session keys is also in
generate_session_keys
. Further this RFC proposes to change the return value of the generate_session_keys
function also to not only return the public session keys, but also the proof of ownership for the private session keys. The
validator operator will then need to send the public session keys and the proof together when registering new session keys on chain.
When submitting the new public session keys to the on chain logic there is no verification of possession of the private session keys. This means that users can basically register any kind of public session keys on chain. While the on chain logic ensures that there are no duplicate keys, someone could try to prevent others from registering new session keys by setting them first. While this wouldn't bring
After this RFC this kind of attack would not be possible anymore, because the on chain logic can verify that the sending account is in ownership of the private session keys.
We are first going to explain the proof format being used:
#![allow(unused)] fn main() {
The on chain logic already gets the proof passed as Vec<u8>. This proof needs to be decoded to the actual Proof type as explained above. The proof and the SCALE encoded account_id of the sender are used to verify the ownership of the SessionKeys.
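As a rough sketch of how such a check could look (the `Proof` shape, the `verify_ownership` helper, and the message layout of one signature per session key over the public key concatenated with the SCALE encoded account id are illustrative assumptions, not the actual interface):

```rust
// Hypothetical sketch, not the actual API: one signature per session key,
// computed over (public_key ++ scale(account_id)).
struct Proof {
    signatures: Vec<Vec<u8>>,
}

fn verify_ownership(
    keys: &[Vec<u8>],
    proof: &Proof,
    account_id_scale: &[u8],
    // (pubkey, message, signature) -> valid?
    verify_sig: impl Fn(&[u8], &[u8], &[u8]) -> bool,
) -> bool {
    keys.len() == proof.signatures.len()
        && keys.iter().zip(&proof.signatures).all(|(key, sig)| {
            let mut msg = key.clone();
            msg.extend_from_slice(account_id_scale);
            verify_sig(key, &msg, sig)
        })
}
```

This mirrors the cost noted later in the RFC: verification amounts to one signature check per individual session key.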
Drawbacks
Validator operators need to pass their account id when rotating their session keys in a node. This will require updating some high level docs and making users familiar with the slightly changed ergonomics.
Testing, Security, and Privacy
Testing of the new changes only requires passing an appropriate owner for the current testing context. The changes to the proof generation and verification got audited to ensure they are correct.
Performance, Ergonomics, and Compatibility
Performance
The session key generation is an offchain process and thus doesn't influence the performance of the chain. Verifying the proof is done on chain as part of the transaction logic for setting the session keys. The verification of the proof amounts to one signature verification per individual session key. As setting the session keys happens quite rarely, it should not influence the overall system performance.
Ergonomics
The interfaces have been optimized to make it as easy as possible to generate the ownership proof.
Compatibility
Introduces a new version of the SessionKeys runtime api. Thus, nodes should be updated before a runtime is enacted that contains these changes, otherwise they will fail to generate session keys. The RPC that exists around this runtime api needs to be updated to support passing the account id and returning the ownership proof alongside the public session keys.
UIs would need to be updated to support the new RPC and the changed on chain logic.
Prior Art and References
None.
Unresolved Questions
None.
Summary
The Fellowship Manifesto states that members should receive a monthly allowance on par with gross income in OECD countries. This RFC proposes concrete amounts.
Motivation
One motivation for the Technical Fellowship is to provide an incentive mechanism that can induct and retain technical talent for the continued progress of the network.
In order for members to uphold their commitment to the network, they should receive support to
Note: Goals of the Fellowship, expectations for each Dan, and conditions for promotion and demotion are all explained in the Manifesto. This RFC is only to propose concrete values for allowances.
Stakeholders
- Fellowship members
- Polkadot Treasury
Explanation
This RFC proposes agreeing on salaries relative to a single level, the III Dan. As such, changes to the amount or asset used would only be on a single value, and all others would adjust relatively. A III Dan is someone whose contributions match the expectations of a full-time individual contributor.
Projections
Updates
Updates to these levels, whether relative ratios, the asset used, or the amount, shall be done via RFC.
Drawbacks
By not using DOT for payment, the protocol relies on the stability of other assets and the ability to acquire them. However, the asset of choice can be changed in the future.
Testing, Security, and Privacy
N/A.
Performance, Ergonomics, and Compatibility
Performance
N/A
Ergonomics
N/A
Compatibility
N/A
Prior Art and References
- The Polkadot Fellowship Manifesto
Summary
When two peers connect to each other, they open (amongst other things) a so-called "notifications protocol" substream dedicated to gossiping transactions to each other.
Each notification on this substream currently consists of a SCALE-encoded Vec<Transaction>, where Transaction is defined in the runtime.
This RFC proposes to modify the format of the notification to become (Compact(1), Transaction). This maintains backwards compatibility, as this new format decodes as a Vec of length equal to 1.
Motivation
There are three motivations behind this change:
- It makes the implementation way more straightforward by not having to repeat code related to back-pressure. See explanations below.
Stakeholders
Low-level developers.
Explanation
To give an example, if you send one notification with three transactions, the bytes that are sent on the wire are:
concat( leb128(total-size-in-bytes-of-the-rest),
This is equivalent to forcing the Vec<Transaction> to always have a length of 1, and I expect the Substrate implementation to simply modify the sending side to add a for loop that sends one notification per item in the Vec.
As explained in the motivation section, this allows extracting scale(transaction) items without having to know how to decode them.
By "flattening" the two-step hierarchy, an implementation only needs to back-pressure individual notifications rather than back-pressure notifications and transactions within notifications.
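The framing described above can be sketched as follows (illustrative only, not the Substrate implementation; it assumes the LEB128 length prefix mentioned earlier and uses the fact that SCALE encodes Compact(1) as the single byte 0x04):

```rust
// Illustrative sketch of the wire framing: a notification is
// leb128(payload_len) ++ payload, and after this RFC the payload is
// scale(Compact(1)) ++ scale(transaction).
fn leb128(mut n: u64) -> Vec<u8> {
    let mut out = Vec::new();
    loop {
        let byte = (n & 0x7f) as u8;
        n >>= 7;
        if n == 0 {
            out.push(byte);
            return out;
        }
        out.push(byte | 0x80);
    }
}

fn notification(scale_encoded_tx: &[u8]) -> Vec<u8> {
    // SCALE Compact(1) encodes as the single byte 0x04.
    let mut payload = vec![0x04u8];
    payload.extend_from_slice(scale_encoded_tx);
    let mut out = leb128(payload.len() as u64);
    out.extend_from_slice(&payload);
    out
}
```

A receiver that decodes this as the old `Vec<Transaction>` format sees a vector of length 1, which is why the change is backwards compatible.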
Drawbacks
This RFC chooses to maintain backwards compatibility at the cost of introducing a very small wart (the Compact(1)).
An alternative could be to introduce a new version of the transactions notifications protocol that sends one Transaction per notification, but this is significantly more complicated to implement and can always be done later in case the Compact(1) is bothersome.
Testing, Security, and Privacy
Irrelevant.
Performance, Ergonomics, and Compatibility
Performance
Irrelevant.
Ergonomics
Irrelevant.
Compatibility
The change is backwards compatible if done in two steps: modify the sender to always send one transaction per notification, then, after a while, modify the receiver to enforce the new format.
Prior Art and References
Irrelevant.
Unresolved Questions
None.
Summary
This RFC proposes to make the mechanism of RFC #8 more generic by introducing the concept of "capabilities".
Implementations can implement certain "capabilities", such as serving old block headers or being a parachain bootnode.
The discovery mechanism of RFC #8 is extended to be able to discover nodes of specific capabilities.
Motivation
The Polkadot peer-to-peer network is made of nodes. Not all these nodes are equal. Some nodes store only the headers of recent blocks, some nodes store all the block headers and bodies since the genesis, some nodes store the storage of all blocks since the genesis, and so on.
It is currently not possible to know ahead of time (without connecting to it and asking) which nodes have which data available, and it is not easily possible to build a list of nodes that have a specific piece of data available.
If you want to download for example the header of block 500, you have to connect to a randomly-chosen node, ask it for block 500, and if it says that it doesn't have the block, disconnect and try another randomly-chosen node. In certain situations such as downloading the storage of old blocks, nodes that have the information are relatively rare, and finding through trial and error a node that has the data can take a long time.
This RFC attempts to solve this problem by giving the possibility to build a list of nodes that are capable of serving specific data.
Stakeholders
Low-level client developers. People interested in accessing the archive of the chain.
Explanation
Reading RFC #8 first might help with comprehension, as this RFC is very similar.
Please keep in mind while reading that everything below applies to both relay chains and parachains, unless mentioned otherwise.
Capabilities
Drawbacks
None that I can see.
Testing, Security, and Privacy
The content of this section is basically the same as the one in RFC 8.
This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms.
Due to the way Kademlia works, it would become the responsibility of the 20 Polkadot nodes whose sha256(peer_id) is closest to the key (described in the explanations section) to store the list of nodes that have specific capabilities. Furthermore, when a large number of providers are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit.
For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target capability. They are then in control of the list of nodes with that capability. While doing this can in no way be actually harmful, it could lead to eclipse attacks.
Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.
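The closeness notion used here is Kademlia's XOR metric; a minimal sketch (the constant 20 matches the text, the helper names are ours, and the hashes stand in for sha256(peer_id) and the capability key):

```rust
// Illustrative sketch of Kademlia-style XOR distance between a hashed
// PeerId and a capability key.
fn xor_distance(a: &[u8; 32], b: &[u8; 32]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = a[i] ^ b[i];
    }
    out
}

// Keep the 20 hashes closest to `key`, as the record holders would be chosen.
fn closest_20(mut hashes: Vec<[u8; 32]>, key: &[u8; 32]) -> Vec<[u8; 32]> {
    hashes.sort_by_key(|h| xor_distance(h, key));
    hashes.truncate(20);
    hashes
}
```

Grinding PeerIds until their hashes dominate this top-20 set is exactly the eclipse vector described above.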
Performance, Ergonomics, and Compatibility
Performance
The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.
Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.
Assuming 1000 nodes with a specific capability, the 20 Polkadot full nodes corresponding to that capability will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a node registers itself as the provider of the key corresponding to BabeApi_next_epoch.
Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that want to know the nodes with a capability. If this ever becomes a problem, the value of 20 is an arbitrary constant that can be increased for more redundancy.
Ergonomics
Irrelevant.
Compatibility
Irrelevant.
Prior Art and References
Unknown.
Unresolved Questions
While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?
Summary
To interact with chains in the Polkadot ecosystem it is required to know how transactions are encoded and how to read state. For doing this, Polkadot-SDK, the framework used by most of the chains in the Polkadot ecosystem, exposes metadata about the runtime to the outside. UIs, wallets, and others can use this metadata to interact with these chains. This makes the metadata a crucial piece of the transaction encoding as users are relying on the interacting software to encode the transactions in the correct format.
It gets even more important when the user signs the transaction on an offline wallet, as the device by its nature cannot get access to the metadata without relying on the online wallet to provide it. The offline wallet thus needs to trust an online party, rendering the security assumptions of offline devices moot.
This RFC proposes a way for offline wallets to leverage metadata within the constraints of these devices. The design idea is that the metadata is chunked and these chunks are put into a merkle tree. The root hash of this merkle tree represents the metadata. The offline wallets can use the root hash to decode transactions by getting proofs for the individual chunks of the metadata. This root hash is also included in the signed data of the transaction (but not sent as part of the transaction). The runtime then includes its known metadata root hash when verifying the transaction. If the metadata root hash known by the runtime differs from the one that the offline wallet used, it very likely means that the online wallet provided some fake data and the verification of the transaction fails.
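As a toy sketch of the chunk-and-merkleize idea (the RFC specifies the exact hash function and tree construction later; the `hash` placeholder and the convention of carrying an odd node up unchanged are assumptions for illustration):

```rust
// Toy sketch: hash each metadata chunk as a leaf, then pair-wise hash up
// to a single root. Assumes at least one leaf; `hash` is a placeholder
// for the hash function the RFC specifies.
fn merkle_root(leaves: &[Vec<u8>], hash: impl Fn(&[u8]) -> [u8; 32]) -> [u8; 32] {
    let mut level: Vec<[u8; 32]> = leaves.iter().map(|l| hash(l)).collect();
    while level.len() > 1 {
        let next: Vec<[u8; 32]> = level
            .chunks(2)
            .map(|pair| {
                if pair.len() == 2 {
                    let mut buf = Vec::with_capacity(64);
                    buf.extend_from_slice(&pair[0]);
                    buf.extend_from_slice(&pair[1]);
                    hash(&buf)
                } else {
                    // Odd node carried up unchanged (one possible convention).
                    pair[0]
                }
            })
            .collect();
        level = next;
    }
    level[0]
}
```

The point of the construction is that a wallet holding only the root can verify any single chunk from a logarithmic-size proof, which is what keeps the data sent to the offline device small.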
Users depend on offline wallets to correctly display decoded transactions before signing. With merkleized metadata, they can be assured of the transaction's legitimacy, as incorrect transactions will be rejected by the runtime.
Motivation
Polkadot's innovative design (both relay chain and parachains) gives developers the ability to upgrade their network as frequently as they need. These systems manage to keep integrations working after upgrades with the help of FRAME Metadata. This Metadata, which is on the order of half a MiB for most Polkadot-SDK chains, completely describes chain interfaces and properties. Securing this metadata is key for users to be able to interact with the Polkadot-SDK chain in the expected way.
On the other hand, offline wallets provide a secure way for blockchain users to hold their own keys (some do a better job than others). These devices seldom get upgraded, usually account for one particular network, and hold very small internal memories. Currently in the Polkadot ecosystem there is no secure way for these offline devices to know the latest Metadata of the Polkadot-SDK chain they are interacting with. This results in a plethora of similar yet slightly different offline wallets for all the different Polkadot-SDK chains, as well as the impediment of keeping these regularly updated, thus not fully leveraging Polkadot-SDK's unique forkless upgrade feature.
The two main reasons why this is not possible today are:
- Chunks handling mechanism SHOULD support chunks being sent in any order without memory utilization overhead;
- Unused enum variants MUST be stripped (this has great impact on transmitted metadata size; examples: era enum, enum with all calls for call batching).
Stakeholders
- Runtime implementors
- UI/wallet implementors
- Offline wallet implementors
The idea for this RFC was brought up by runtime implementors and was extensively discussed with offline wallet implementors. It was designed in such a way that it can work easily with the existing offline wallet solutions in the Polkadot ecosystem.
Explanation
The FRAME metadata provides a wide range of information about a FRAME based runtime. It contains information about the pallets, the calls per pallet, the storage entries per pallet, runtime APIs, and type information about most of the types that are used in the runtime. For decoding extrinsics on an offline wallet, what is mainly required is type information. Most of the other information in the FRAME metadata is actually not required for decoding extrinsics and thus it can be removed. Therefore, the following is a proposal on a custom representation of the metadata and how this custom metadata is chunked, ensuring that only the needed chunks required for decoding a particular extrinsic are sent to the offline wallet. The necessary information to transform the FRAME metadata type information into the type information presented in this RFC will be provided. However, not every single detail on how to convert from FRAME metadata into the RFC type information is described.
First, the MetadataDigest is introduced. After that, ExtrinsicMetadata is covered, and finally the actual format of the type information. Then pruning of unrelated type information is covered, as well as how to generate the TypeRefs. In the last step, the merkle tree calculation is explained.
Metadata digest
Drawbacks
The chunking may not be the optimal case for every kind of offline wallet.
Testing, Security, and Privacy
All implementations are required to strictly follow the RFC to generate the metadata hash. This includes which hash function to use and how to construct the metadata types tree. So, all implementations are following the same security criteria. As the chains will calculate the metadata hash at compile time, the build process needs to be trusted. However, this is already a solved problem in the Polkadot ecosystem by using reproducible builds. So, anyone can rebuild a chain runtime to ensure that a proposal is actually containing the changes as advertised.
Implementations can also be tested easily against each other by taking some metadata and ensuring that they all come to the same metadata hash.
Privacy of users should also not be impacted. This assumes that wallets will generate the metadata hash locally and don't leak any information to third party services about which chunks a user will send to their offline wallet. Besides that, there is no leak of private information as getting the raw metadata from the chain is an operation that is done by almost everyone.
Performance, Ergonomics, and Compatibility
Performance
There should be no measurable impact on performance to Polkadot or any other chain using this feature. The metadata root hash is calculated at compile time and at runtime it is optionally used when checking the signature of a transaction. This means that at runtime no performance heavy operations are done.
Ergonomics & Compatibility
The proposal alters the way a transaction is built, signed, and verified. So, this imposes some required changes on any developer who wants to construct transactions for Polkadot or any chain using this feature. As the developer can pass 0 to disable the verification of the metadata root hash, it can be easily ignored.
Prior Art and References
RFC 46 produced by the Alzymologist team is a previous work reference that goes in this direction as well.
On other ecosystems, there are other solutions to the problem of trusted signing. Cosmos for example has a standardized way of transforming a transaction into some textual representation and this textual representation is included in the signed data. Basically achieving the same as what the RFC proposes, but it requires that for every transaction applied in a block, every node in the network always has to generate this textual representation to ensure the transaction signature is valid.
Unresolved Questions
Authors: George Pisaltu
Summary
This RFC proposes a change to the extrinsic format to incorporate a new transaction type, the "general" transaction.
Motivation
"General" transactions, a new type of transaction that this RFC aims to support, are transactions which obey the runtime's extensions and have according extension data yet do not have hard-coded signatures. They are first described in Extrinsic Horizon and supported in 3685. They enable users to authorize origins in new, more flexible ways (e.g. ZK proofs, mutations over pre-authenticated origins). As of now, all transactions are limited to the account signing model for origin authorization and any additional origin changes happen in extrinsic logic, which cannot leverage the validation process of extensions.
An example of a use case for such an extension would be sponsoring the transaction fee for some other user. A new extension would be put in place to verify that a part of the initial payload was signed by the author under who the extrinsic should run and change the origin, but the payment for the whole transaction should be handled under a sponsor's account. A POC for this can be found in 3712.
The new "general" transaction type would coexist with both current transaction types for a while and, therefore, the current number of supported transaction types, capped at 2, is insufficient. A new extrinsic type must be introduced alongside the current signed and unsigned types. Currently, an encoded extrinsic's first byte indicates the type of extrinsic using the most significant bit - 0 for unsigned, 1 for signed - and the 7 following bits indicate the extrinsic format version, which has been equal to 4 for a long time.
By taking one bit from the extrinsic format version encoding, we can support 2 additional extrinsic types while also having a minimal impact on our capability to extend and change the extrinsic format in the future.
Stakeholders
- Runtime users
- Runtime devs
- Wallet devs
Explanation
An extrinsic is currently encoded as one byte to identify the extrinsic type and version. This RFC aims to change the interpretation of this byte regarding the reserved bits for the extrinsic type and version. In the following explanation, bits represented using T make up the extrinsic type and bits represented using V make up the extrinsic version.
Currently, the bit allocation within the leading encoded byte is 0bTVVV_VVVV. In practice in the Polkadot ecosystem, the leading byte would be 0bT000_0100, as the version has been equal to 4 for a long time.
This RFC proposes for the bit allocation to change to 0bTTVV_VVVV. As a result, the extrinsic format version will be bumped to 5 and the extrinsic type bit representation would change as follows:
- 11 reserved
Drawbacks
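Under the proposed 0bTTVV_VVVV allocation, parsing the leading byte is a simple bit split. A sketch (only "11 = reserved" is stated above, so no mapping is assumed here for the other type-bit combinations):

```rust
// Sketch of splitting the proposed leading extrinsic byte into its two
// type bits (TT) and 6-bit format version (VVVVVV).
fn split_leading_byte(b: u8) -> (u8, u8) {
    (b >> 6, b & 0b0011_1111)
}
```

For developers this is the "bitmask update" mentioned in the Ergonomics section: shift by 6 for the type instead of 7, and mask 6 version bits instead of 7.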
This change would reduce the maximum possible transaction version from the current 127 to 63. In order to bypass the new, lower limit, the extrinsic format would have to change again.
Testing, Security, and Privacy
There is no impact on testing, security or privacy.
Performance, Ergonomics, and Compatibility
This change would allow Polkadot to support new types of transactions, with the specific "general" transaction type in mind at the time of writing this proposal.
Performance
There is no performance impact.
Ergonomics
The impact to developers and end-users is minimal as it would just be a bitmask update on their part for parsing the extrinsic type along with the version.
Compatibility
This change breaks backwards compatibility because any transaction that is neither signed nor unsigned, but a new transaction type, would be interpreted as having a future extrinsic format version.
Prior Art and References
The original design was proposed in the TransactionExtension PR, which is also the motivation behind this effort.
Unresolved Questions
None.
Authors: Alex Gheorghe (alexggh)
Summary
Extend the DHT authority discovery records with a signed creation time, so that nodes can determine which record is newer and always decide to prefer the newer records to the old ones.
Motivation
Currently, we use the Kademlia DHT for storing records regarding the p2p address of an authority discovery key. The problem is that if a node decides to change its PeerId/network key, it will publish a new record; however, because of the distributed and replicated nature of the DHT, there is no way to tell which record is newer, so both the old and the new PeerId will live in the network until the old one expires (36h). This creates all sorts of problems and leads to the node changing its address not being properly connected for up to 36h.
After this RFC, nodes keep the newer record and propagate it to nodes that still store the old one, so all nodes converge to the new record much faster (in the order of minutes, not 36h).
Implementation of the RFC: https://github.com/paritytech/polkadot-sdk/pull/3786.
Current issue without this enhancement: https://github.com/paritytech/polkadot-sdk/issues/3673
Stakeholders
Polkadot node developers.
Explanation
This RFC heavily relies on the functionalities of the Kademlia DHT already in use by Polkadot. You can find a link to the specification here.
In a nutshell, on a specific node the current authority-discovery protocol publishes Kademlia DHT records at startup and periodically. The records contain the full address of the node for each authority key it owns. The node also tries to find the full addresses of all authorities in the network by querying the DHT and picking up the first record it finds for each of the authority ids it found on chain.
Each time a node wants to resolve an authority ID, it will issue a query with a certain redundancy factor, and from all the results it receives it will pick only the newest record. Additionally, in order to speed up the time until all nodes have the newest record, nodes can optionally implement logic to send the new record to nodes that answered with the older record.
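The record-selection rule can be sketched as follows (the struct and field names are illustrative, not the actual SignedAuthorityRecord layout):

```rust
// Illustrative only: prefer the record with the newest signed creation
// time among all DHT answers received for one authority ID.
#[derive(Clone)]
struct AuthorityRecord {
    creation_time_ns: u128,
    addresses: Vec<String>,
}

fn pick_newest(records: &[AuthorityRecord]) -> Option<AuthorityRecord> {
    records.iter().max_by_key(|r| r.creation_time_ns).cloned()
}
```

Because the creation time is part of the signed payload, a stale or forged timestamp cannot be injected without invalidating the signature.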
Drawbacks
In theory the new protocol creates a bit more traffic on the DHT network, because it waits for DHT records to be received from more than one node, while in the current implementation we just take the first record that we receive and cancel all in-flight requests to other peers. However, because the redundancy factor will be relatively small and this operation happens rarely (every 10 min), this cost is negligible.
Testing, Security, and Privacy
This RFC's implementation (https://github.com/paritytech/polkadot-sdk/pull/3786) has been tested on various local test networks and on Versi.
With regard to security, the creation time is wrapped inside SignedAuthorityRecord, so it will be signed with the authority id key; hence there is no way for malicious nodes to manipulate this field without the receiving node noticing.
Performance, Ergonomics, and Compatibility
Irrelevant.
Performance
Irrelevant.
Ergonomics
Irrelevant.
Compatibility
The changes are backwards compatible with the existing protocol, so nodes with both the old and the new protocol can coexist in the network. This is achieved by using protobuf for serializing and deserializing the records: new fields will be ignored when deserializing with the older protocol and, vice versa, when deserializing an old record with the new protocol, the new field will be None and the new code accepts the record as valid.
Prior Art and References
The enhancements have been inspired by the algorithm specified here
Unresolved Questions
N/A
Summary
This RFC proposes a flexible unbonding mechanism for tokens that are locked from staking on the Relay Chain (DOT/KSM), aiming to enhance user convenience without compromising system security.
Locking tokens for staking ensures that Polkadot is able to slash tokens backing misbehaving validators. With changing the locking period, we still need to make sure that Polkadot can slash enough tokens to deter misbehaviour. This means that not all tokens can be unbonded immediately, however we can still allow some tokens to be unbonded quickly.
The new mechanism leads to a significantly reduced unbonding time on average, by queuing up new unbonding requests and scaling their unbonding duration relative to the size of the queue. New requests are executed with a minimum of 2 days, when the queue is comparatively empty, up to the conventional 28 days, if the sum of requests (in terms of stake) exceeds some threshold. In scenarios between these two bounds, the unbonding duration scales proportionately. The new mechanism will never be worse than the current fixed 28 days.
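The scaling rule can be sketched as a linear interpolation between the two bounds (the 2-day and 28-day parameters follow the text; the stake threshold and the exact linear form between the bounds are assumptions for illustration):

```rust
// The 2- and 28-day bounds come from the text; the threshold and the
// linear interpolation between the bounds are illustrative assumptions.
const MIN_DAYS: f64 = 2.0;
const MAX_DAYS: f64 = 28.0;

fn unbonding_days(queued_stake: f64, threshold_stake: f64) -> f64 {
    let fraction = (queued_stake / threshold_stake).clamp(0.0, 1.0);
    MIN_DAYS + fraction * (MAX_DAYS - MIN_DAYS)
}
```

Clamping the fraction at 1 is what guarantees the mechanism is never worse than the current fixed 28 days.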
In this document we also present an empirical analysis by retrospectively fitting the proposed mechanism to the historic unbonding timeline and show that the average unbonding duration would drastically reduce, while still being sensitive to large unbonding events. Additionally, we discuss implications for UI, UX, and conviction voting.
Note: Our proposition solely focuses on the locks imposed by staking. Other locks, such as governance, remain unchanged. Also, this mechanism should not be confused with the already existing feature of FastUnstake, which lets users immediately unstake tokens that have not received rewards for 28 days or longer.
As an initial step to gauge its effectiveness and stability, it is recommended to implement and test this model on Kusama before considering its integration into Polkadot, with appropriate adjustments to the parameters. In the following, however, we limit our discussion to Polkadot.
Motivation
Polkadot has one of the longest unbonding periods among all Proof-of-Stake protocols, because security is the most important goal. Staking on Polkadot is still attractive compared to other protocols because of its above-average staking APY. However, the long unbonding period harms usability and deters potential participants that want to contribute to the security of the network.
The current length of the unbonding period imposes significant costs for any entity that even wants to perform basic tasks such as a reorganization / consolidation of their stashes, or updating their private key infrastructure. It also limits participation of users that have a large preference for liquidity.
The combination of long unbonding periods and high returns has led to the proliferation of liquid staking, where parachains or centralised exchanges offer users their staked tokens before the 28-day unbonding period is over, either in original DOT/KSM form or as derivative tokens. Liquid staking is harmless if few tokens are involved, but it could result in many validators being selected by a few entities if a large fraction of DOTs were involved. This may lead to centralization (see here for more discussion on threats of liquid staking) and an opportunity for attacks.
The new mechanism greatly increases the competitiveness of Polkadot, while maintaining sufficient security.
Stakeholders
- Every DOT/KSM token holder
Explanation
Before diving into the details of how to implement the unbonding queue, we give readers context about why Polkadot has a 28-day unbonding period in the first place. The reason is to prevent long-range attacks (LRAs), which become theoretically possible if more than 1/3 of validators collude. In essence, an LRA describes the inability of users who disconnect from the consensus at time t0 and reconnect later to realize that validators which were legitimate at t0 but dropped out in the meantime are not to be trusted anymore. That means, for example, a user syncing the state could be fooled into trusting validators that fell outside the active set after t0 and are building a competing, malicious chain (fork).
LRAs of longer than 28 days are mitigated by the use of trusted checkpoints, which are assumed to be no more than 28 days old. A new node that syncs Polkadot will start at the checkpoint and look for proofs of finality of later blocks, signed by 2/3 of the validators. In an LRA fork, some of the validator sets may be different but only if 2/3 of some validator set in the last 28 days signed something incorrect.
If we detect an LRA of no more than 28 days with the current unbonding period, then we should be able to detect misbehaviour from over 1/3 of validators whose nominators are still bonded. The stake backing these validators is a considerable fraction of the total stake (empirically around 0.287). If we allowed more than this stake to unbond, without checking who it was backing, then the LRA might be free of cost for an attacker. The proposed mechanism allows up to half this stake to unbond within 28 days. This halves the amount of tokens that can be slashed, but it is still very high in absolute terms. For example, at the time of writing (19.06.2024) this would translate to around 120 million DOT.
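To make the orders of magnitude concrete, the paragraph above can be turned into a small arithmetic sketch. The 0.287 fraction is the empirical figure quoted in the text; the total-staked amount is a purely hypothetical illustration value, not live chain data:

```rust
// Sketch: how much stake can unbond within 28 days under the proposed queue,
// and how much of the LRA-relevant stake therefore remains slashable.
fn max_unbondable(total_staked_dot: f64, lra_backing_fraction: f64) -> f64 {
    // The mechanism allows up to half of the LRA-relevant stake to unbond
    // within the 28-day window.
    total_staked_dot * lra_backing_fraction / 2.0
}

fn main() {
    let total_staked_dot = 840_000_000.0; // hypothetical illustration value
    let lra_backing_fraction = 0.287;     // empirical fraction from the text
    let unbondable = max_unbondable(total_staked_dot, lra_backing_fraction);
    // The same amount stays bonded, so it can still be slashed after an LRA.
    let still_slashable = total_staked_dot * lra_backing_fraction - unbondable;
    assert!((unbondable - 120_540_000.0).abs() < 1.0);
    assert!((still_slashable - unbondable).abs() < 1e-6);
}
```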
Potential Extension
In addition to a simple queue, we could add a market component that lets users always unbond from staking at the minimum possible waiting time (LOWER_BOUND, e.g., 2 days) by paying a variable fee. To achieve this, it is reasonable to split the total unbonding capacity into two chunks: the first for the simple queue and the remainder for fee-based unbonding. By doing so, we allow users to choose between the quickest unbond for a dynamic fee and joining the simple queue. Setting a capacity restriction for both queues enables us to guarantee a predictable unbonding time in the simple queue, while allowing users with the respective willingness to pay to exit even earlier. The fees are dynamically adjusted and are proportional to the unbonding stake (and thereby expressed as a percentage of the requested unbonding stake). In contrast to a unified queue, this prevents users paying a fee from jumping in front of users not paying a fee and pushing their unbonding time back (which would be bad for UX). The revenue generated could be burned.
This extension and further specifications are left out of this RFC, because it adds further complexity and the empirical analysis above suggests that average unbonding times will already be close to the LOWER_BOUND, making a more complex design unnecessary. We advise to first implement the discussed mechanism and assess after some experience whether an extension is desirable.
Drawbacks
- Lower security for LRAs: Without a doubt, the theoretical security against LRAs decreases. But, as we argue, the attack remains costly enough to deter attackers, and the attack is sufficiently theoretical. Here, the benefits outweigh the costs.
- Griefing attacks: A large holder could pretend to unbond a large amount of their tokens to prevent other users from exiting the network earlier. This would, however, be costly, because the holder loses out on staking rewards. The larger the impact on the queue, the higher the costs. In any case, it must be noted that the UPPER_BOUND is still 28 days, which means that nominators are never left with a longer unbonding period than currently. There is not enough gain for the attacker to endure this cost.
- Challenge for Custodians and Liquid Staking Providers: Changing the unbonding time, especially making it flexible, requires entities that offer staking derivatives to rethink and rework their products.
Testing, Security, and Privacy
NA
Performance, Ergonomics, and Compatibility
NA
Performance
The authors cannot see any potential impact on performance.
Ergonomics
The authors cannot see any potential impact on ergonomics for developers. We discussed potential impact on UX/UI for users above.
Compatibility
The authors cannot see any potential impact on compatibility. This should be assessed by the technical fellows.
Prior Art and References
- Ethereum proposed a similar solution
- Alistair did some initial write-up
Authors Bastian Köcher
Summary
This RFC proposes a change to the extrinsic format to include a transaction extension version.
Motivation
The extrinsic format supports being extended with transaction extensions. These transaction extensions are runtime specific and can differ per chain. Each transaction extension can add data to the extrinsic itself or extend the signed payload. This means that adding a transaction extension breaks the chain-specific extrinsic format. A recent example was the introduction of CheckMetadataHash to Polkadot and all its system chains. As the extension was adding one byte to the extrinsic, it broke a lot of tooling. By introducing an extra version for the transaction extensions, it will be possible to introduce changes to these transaction extensions while still being backwards compatible. Based on the version of the transaction extensions, each chain runtime can decode the extrinsic correctly and also create the correct signed payload.
Stakeholders
- Runtime users
- Runtime devs
- Wallet devs
Explanation
RFC84 introduced the extrinsic format 5. The idea is to piggyback onto this change of the extrinsic format to add the extra version for the transaction extensions. If required, this could also come as extrinsic format 6, but 5 is not yet deployed anywhere.
The extrinsic format supports the following types of transactions:
The Version is a SCALE-encoded u8 representing the version of the transaction extensions.
In the chain runtime, the version can be used to determine which set of transaction extensions should be used to decode and validate the transaction.
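As a sketch of the decoding flow, the runtime could peek at the leading version byte (a SCALE-encoded u8 is exactly one byte) and select the matching extension-set decoder. The extension sets and field layout below are invented for illustration and are not the actual Substrate types:

```rust
// Sketch: the transaction-extension payload is prefixed with a one-byte
// version that selects the decoder for the rest of the payload.
#[derive(Debug, PartialEq)]
enum Extensions {
    V0 { nonce: u8 },          // illustrative original extension set
    V1 { nonce: u8, tip: u8 }, // illustrative later set that added a field
}

fn decode_extensions(payload: &[u8]) -> Result<Extensions, &'static str> {
    let (&version, rest) = payload.split_first().ok_or("empty payload")?;
    match version {
        0 => Ok(Extensions::V0 { nonce: *rest.first().ok_or("truncated")? }),
        1 => {
            let nonce = *rest.first().ok_or("truncated")?;
            let tip = *rest.get(1).ok_or("truncated")?;
            Ok(Extensions::V1 { nonce, tip })
        }
        _ => Err("unknown transaction extension version"),
    }
}

fn main() {
    // An old (version 0) transaction stays decodable after version 1 is added.
    assert_eq!(decode_extensions(&[0, 7]), Ok(Extensions::V0 { nonce: 7 }));
    assert_eq!(decode_extensions(&[1, 7, 2]), Ok(Extensions::V1 { nonce: 7, tip: 2 }));
    assert!(decode_extensions(&[9]).is_err());
}
```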
Drawbacks
This adds one byte more to each signed transaction.
Testing, Security, and Privacy
There is no impact on testing, security or privacy.
Performance, Ergonomics, and Compatibility
This will ensure that changes to the transaction extensions can be done in a backwards-compatible way.
Performance
There is no performance impact.
Ergonomics
Runtime developers need to take care of the versioning and ensure the version is bumped as required, so that there are no compatibility-breaking changes without a version bump. It will also add a little more code to the runtime to decode these old versions, but this should be negligible.
Compatibility
When introduced together with extrinsic format version 5 from RFC84, it can be implemented in a backwards-compatible way. So, transactions can still be sent using the old extrinsic format and decoded by the runtime.
Prior Art and References
None.
Unresolved Questions
None.
Authors Adrian Catangiu
Summary
The Transact XCM instruction currently forces the user to set a specific maximum weight allowed to the inner call and then also pay for that much weight, regardless of how much the call actually needs in practice.
This RFC proposes improving the usability of Transact by removing that parameter and instead getting and charging the actual weight of the inner call from its dispatch info on the remote chain.
Motivation
The UX of using Transact is poor because of having to guess/estimate the require_weight_at_most weight used by the inner call on the target.
We've seen multiple Transact on-chain failures caused by guessing wrong values for this require_weight_at_most, even though the rest of the XCM program would have worked.
In practice, this parameter only adds UX overhead with no real practical value. Use cases fall in one of two categories:
We've had multiple OpenGov root/whitelisted_caller proposals initiated by core-devs completely or partially fail because of incorrect configuration of the require_weight_at_most parameter. This is a strong indication that the instruction is hard to use.
Stakeholders
- Runtime Users
- Runtime Devs
- Wallets
- dApps
Explanation
The proposed enhancement is simple: remove the require_weight_at_most parameter from the instruction:
- Transact { origin_kind: OriginKind, require_weight_at_most: Weight, call: DoubleEncoded<Call> },
+ Transact { origin_kind: OriginKind, call: DoubleEncoded<Call> },
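A minimal sketch of the new instruction shape and call-derived weighing, using toy stand-ins for the XCM types (the decode step is faked; in reality the weight comes from the decoded call's dispatch info on the remote chain):

```rust
// Toy stand-ins for the XCM types; the real definitions live in the xcm crate.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Weight {
    ref_time: u64,
}

enum OriginKind {
    SovereignAccount,
}

// Simplified "double-encoded" call: opaque bytes decoded on the remote chain.
struct DoubleEncoded {
    encoded: Vec<u8>,
}

// New-style Transact: no require_weight_at_most field.
struct Transact {
    origin_kind: OriginKind,
    call: DoubleEncoded,
}

// Hypothetical decode step: a real runtime SCALE-decodes the call and reads
// its dispatch info. Here the weight is derived from payload length purely
// for illustration.
fn decode_and_weigh(call: &DoubleEncoded) -> Weight {
    Weight { ref_time: call.encoded.len() as u64 * 1_000 }
}

// Weigh the instruction from the actual inner call, not a user-supplied cap.
fn weigh_transact(t: &Transact) -> Weight {
    decode_and_weigh(&t.call)
}

fn main() {
    let t = Transact {
        origin_kind: OriginKind::SovereignAccount,
        call: DoubleEncoded { encoded: vec![0u8; 4] },
    };
    assert_eq!(weigh_transact(&t).ref_time, 4_000);
}
```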
The XCVM implementation shall no longer use require_weight_at_most for weighing. Instead, it shall weigh the Transact instruction by decoding and weighing the inner call.
Drawbacks
No drawbacks, existing scenarios work as before, while this also allows new/easier flows.
Testing, Security, and Privacy
Currently, an XCVM implementation can weigh a message just by looking at the decoded instructions without decoding the Transact's call, assuming require_weight_at_most weight for it. With the new version, it has to decode the inner call to know its actual weight.
With the new
Transact
the weighing happens after decoding the innercall
. The entirety of the XCM program containing thisTransact
needs to be either covered by enough bought weight using aBuyExecution
, or the origin has to be allowed to do free execution.The security considerations around how much can someone execute for free are the same for both this new version and the old. In both cases, an "attacker" can do the XCM decoding (including Transact inner
call
s) for free by adding a large enoughBuyExecution
without actually having the funds available.In both cases, decoding is done for free, but in both cases execution fails early on
-BuyExecution
.Performance, Ergonomics, and Compatibility
-Performance
+Performance, Ergonomics, and Compatibility
+Performance
No performance change.
Ergonomics
Ergonomics are slightly improved by simplifying the Transact API.
Compatibility
Compatible with previous XCM programs.
Prior Art and References
None.
Unresolved Questions
None.
Summary
This RFC aims to remove the NetworkIds of Westend and Rococo, arguing that testnets shouldn't go in the language.
Motivation
We've already seen the plans to phase out Rococo, and Paseo has appeared. Instead of constantly changing the testnets included in the language, we should favor specifying them via their genesis hash, using NetworkId::ByGenesis.
Stakeholders
- Runtime devs
- Wallets
- dApps
Explanation
Remove Westend and Rococo from the included NetworkIds in the language.
Drawbacks
This RFC will make it less convenient to specify a testnet, but not by a large amount.
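For illustration, a toy version of the resulting enum and how a testnet is addressed after the removal; the genesis hash below is a placeholder, not Westend's real one:

```rust
// Sketch: after removing the named testnet variants, a testnet is identified
// by its genesis hash via the ByGenesis variant. Variant set is abbreviated.
#[derive(Debug, PartialEq)]
enum NetworkId {
    Polkadot,
    Kusama,
    ByGenesis([u8; 32]), // testnets (Westend, Rococo, Paseo, ...) go here
}

fn main() {
    // Placeholder bytes for illustration only; use the chain's real genesis hash.
    let westend = NetworkId::ByGenesis([0xaa; 32]);
    assert_ne!(westend, NetworkId::Polkadot);
    assert_eq!(westend, NetworkId::ByGenesis([0xaa; 32]));
}
```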
Testing, Security, and Privacy
None.
Performance, Ergonomics, and Compatibility
Performance
None.
Ergonomics
It will very slightly reduce ergonomics for testnet developers but improve the stability of the language.
Compatibility
NetworkId::Rococo and NetworkId::Westend can just use NetworkId::ByGenesis, as can other testnets.
Prior Art and References
A previous attempt to add NetworkId::Paseo: https://github.com/polkadot-fellows/xcm-format/pull/58.
Unresolved Questions
None.
Summary
An off-chain approximation protocol should assign rewards based upon the approvals and availability work done by validators.
All validators track which approval votes they actually use, reporting the aggregate, after which an on-chain median computation gives a good approximation under byzantine assumptions. Approval checkers report aggregate information about which availability chunks they use too, but in availability we need a tit-for-tat game to enforce honesty, because approval committees could often bias results thanks to their small size.
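The median step can be sketched as follows: taking the median of per-validator reports means that, as long as a majority of reports are honest, a byzantine minority cannot push the result outside the range of honest values:

```rust
// Sketch: each validator reports an aggregate usage count for some peer;
// the chain takes the median of the reports, which a byzantine minority
// cannot move past the honest values.
fn median(mut reports: Vec<u64>) -> u64 {
    assert!(!reports.is_empty());
    reports.sort_unstable();
    reports[reports.len() / 2] // upper median; the tie-break choice is illustrative
}

fn main() {
    // Seven honest-ish reports plus three exaggerated ones: median stays honest.
    let reports = vec![10, 11, 10, 12, 11, 10, 11, 1_000, 1_000, 1_000];
    assert_eq!(median(reports), 11);
}
```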
Motivation
We want all Polkadot subsystems to be profitable for validators, because otherwise operators might profit from running modified code. In particular, almost all rewards in Kusama/Polkadot should come from work done securing parachains, primarily approval checking, but also backing, availability, and support of XCMP.
Among these tasks, our highest priorities must be approval checks, which ensure soundness, and sending availability chunks to approval checkers. We prove backers must be paid strictly less than approval checkers.
At present though, validators' rewards have relatively little relationship to validators' operating costs, in terms of bandwidth and CPU time. Worse, Polkadot's scaling makes us particularly vulnerable to "no-shows" caused by validators skipping their approval checks.
We're particularly concerned about the impact of hardware specs upon the number of parachain cores. We've requested relatively low-spec machines so far, only four physical CPU cores, although some run even lower specs like only two physical CPU cores. Alone, rewards cannot fix our low-spec validator problem, but rewards and outreach together should have far more impact than either alone.
In future, we'll further increase validator spec requirements, which directly improves Polkadot's throughput, and which repeats this dynamic of purging under-spec nodes, except outreach becomes more important because de facto too many slow validators can "out vote" the faster ones.
Stakeholders
We alter the validator rewards protocol, but with negligible impact upon rewards for honest validators who comply with hardware and bandwidth recommendations.
We shall still reward participation in relay chain consensus of course, which de facto means block production but not finality, but these current reward levels shall wind up greatly reduced. Any validators who manipulate block rewards now could lose rewards here, simply because rewards are shifted from block production to availability, but this sounds desirable.
We've discussed roughly this rewards protocol in https://hackmd.io/@rgbPIkIdTwSICPuAq67Jbw/S1fHcvXSF and https://github.com/paritytech/polkadot-sdk/issues/1811 as well as related topics like https://github.com/paritytech/polkadot-sdk/issues/5122
We distribute rewards on-chain using approval_usages_medians and reweighted_total_used_downloads. Approval checkers could later change from whom they download chunks using my_missing_uploads.
Strategies
In theory, validators could adopt whatever strategy they like to penalize validators who stiff them on availability redistribution rewards, except they should not stiff back, only choose other availability providers. We discuss one good strategy below, but initially this could go unimplemented.
Explanation
Backing
Polkadot's efficiency creates subtle liveness concerns: any time one node cannot perform one of its approval checks, Polkadot loses in expectation 3.25 approval checks, or 0.10833 parablocks. This makes back pressure essential.
We cannot throttle approval checks securely either, so reactive off-chain back pressure only makes sense during or before the backing phase. In other words, if nodes feel overworked themselves, or perhaps believe others to be, then they should drop backing checks, never approval checks. It follows that backing work must be rewarded less well and less reliably than approvals, as otherwise validators could benefit from behavior that harms the network.
We discuss approvals being considered by the tit-for-tat in earlier drafts. An adversary who successfully manipulates the rewards median votes would've already violated Polkadot's security assumptions though, which requires a hard fork and correcting the DOT allocation. Incorrectly reported approval_usages remain interesting statistics though.
Adversarial validators could manipulate their availability votes though, even without being a supermajority. If they still download honestly, then this costs them more rewards than they earn. We do not prevent validators from preferentially obtaining their pieces from their friends though. We should analyze, or at least observe, the long-term consequences.
A priori, whale nominators' validators could stiff validators but then rotate their validators quickly enough that they never suffer being skipped back. We discuss several possible solutions, and their difficulties, under "Rob's nominator-wise skipping" in https://hackmd.io/@rgbPIkIdTwSICPuAq67Jbw/S1fHcvXSF, but overall less seems like more here. Also, frequent validator rotation could be penalized elsewhere.
Performance, Ergonomics, and Compatibility
We operate off-chain except for final rewards votes and median tallies. We expect lower overhead rewards protocols would lack information, thereby admitting easier cheating.
Initially, we designed the ELVES approval gadget to allow on-chain operation, in part for rewards computation, but doing so looks expensive. Also, on-chain rewards computation remains only an approximation, but could be biased even more easily than the off-chain protocol presented here.
Prior Art and References
None
Unresolved Questions
Provide specific questions to discuss and address before the RFC is voted on by the Fellowship. This should include, for example, alternatives to aspects of the proposed design where the appropriate trade-off to make is unclear.
Summary
Update the runtime-host interface to no longer make use of a host-side allocator.
Motivation
The heap allocation of the runtime is currently controlled by the host using a memory allocator on the host side.
The API of many host functions consists in allocating a buffer. For example, when calling ext_hashing_twox_256_version_1, the host allocates a 32-byte buffer using the host allocator, and returns a pointer to this buffer to the runtime. The runtime later has to call ext_allocator_free_version_1 on this pointer in order to free the buffer.
Even though no benchmark has been done, it is pretty obvious that this design is very inefficient. To continue with the example of ext_hashing_twox_256_version_1, it would be more efficient to instead write the output hash to a buffer that was allocated by the runtime on its stack and passed by pointer to the function. Allocating a buffer on the stack in the worst case simply consists in decreasing a number, and in the best case is free. Doing so would save many Wasm memory reads and writes by the allocator, and would save a function call to ext_allocator_free_version_1.
.Furthermore, the existence of the host-side allocator has become questionable over time. It is implemented in a very naive way, and for determinism and backwards compatibility reasons it needs to be implemented exactly identically in every client implementation. Runtimes make substantial use of heap memory allocations, and each allocation needs to go twice through the runtime <-> host boundary (once for allocating and once for freeing). Moving the allocator to the runtime side, while it would increase the size of the runtime, would be a good idea. But before the host-side allocator can be deprecated, all the host functions that make use of it need to be updated to not use it.
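The shape of the change can be sketched in Rust: instead of the callee allocating a buffer and returning ownership (which must later be freed), the caller passes a buffer, e.g. from its stack, for the function to fill. The toy_hash below is a stand-in, not the real twox-256:

```rust
// Old style (sketch): callee allocates, returns ownership, caller must free.
fn hash_host_allocated(data: &[u8]) -> Vec<u8> {
    toy_hash(data).to_vec() // heap allocation on every call
}

// New style (sketch): caller supplies the 32-byte output buffer, e.g. on its
// stack, and the function simply writes into it. No allocator involved.
fn hash_into(data: &[u8], out: &mut [u8; 32]) {
    *out = toy_hash(data);
}

// Stand-in for the twox-256 hash; NOT the real algorithm.
fn toy_hash(data: &[u8]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for (i, b) in data.iter().enumerate() {
        out[i % 32] = out[i % 32].wrapping_add(*b).rotate_left(3);
    }
    out
}

fn main() {
    let mut buf = [0u8; 32]; // stack allocation: just bumps the stack pointer
    hash_into(b"hello", &mut buf);
    assert_eq!(buf.to_vec(), hash_host_allocated(b"hello"));
}
```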
Stakeholders
No attempt was made at convincing stakeholders.
Explanation
New host functions
This section contains a list of new host functions to introduce.
(func $ext_storage_read_version_2
Other changes
ext_allocator_free_version_1
ext_offchain_network_state_version_1
This RFC might be difficult to implement in Substrate due to the internal code design. It is not clear to the author of this RFC how difficult it would be.
The API of these new functions was heavily inspired by API used by the C programming language.
This RFC proposes a dynamic pricing model for the sale of Bulk Coretime on the Polkadot UC. The proposed model updates the regular price of cores for each sale period, taking into account the number of cores sold in the previous sale, as well as a limit of cores and a target number of cores sold. It ensures a minimum price and limits price growth to a maximum price increase factor, while also giving governance control over the steepness of the price change curve. It allows governance to address challenges arising from changing market conditions and should offer predictable and controlled price adjustments.
Accompanying visualizations are provided at [1].
RFC-1 proposes periodic Bulk Coretime Sales as a mechanism to sell continuous regions of blockspace (suggested to be 4 weeks in length). A number of Blockspace Regions (compare RFC-1 & RFC-3) are provided for sale to the Broker-Chain each period and shall be sold in a way that provides value-capture for the Polkadot network. The exact pricing mechanism is out of scope for RFC-1 and shall be provided by this RFC.
A dynamic pricing model is needed. A limited number of Regions are offered for sale each period. The model needs to find the price for a period based on supply and demand of the previous period.
The model shall give Coretime consumers predictability about upcoming price developments and confidence that Polkadot governance can adapt the pricing model to changing market conditions.
The primary stakeholders of this RFC are:
The dynamic pricing model sets the new price based on supply and demand in the previous period. The model is a function of the number of Regions sold, piecewise-defined by two power functions.
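Since the concrete curve is specified later in the RFC, the following is only an illustrative sketch of the properties stated above: a price floor, a capped maximum increase, and an adjustment driven by cores sold versus the target. The constants and the linear interpolation are assumptions, not the RFC's actual piecewise power functions:

```rust
// Illustrative price update honoring the stated constraints; NOT the RFC's
// actual piecewise power-function curve.
fn next_price(old_price: f64, sold: u32, target: u32, limit: u32) -> f64 {
    const MIN_PRICE: f64 = 1.0;    // assumed floor
    const MAX_INCREASE: f64 = 2.0; // assumed max growth factor per period
    let factor = if sold <= target {
        // Undersold: scale down, linearly in sold/target (assumption).
        0.5 + 0.5 * sold as f64 / target as f64
    } else {
        // Oversold: scale up toward the cap, linearly in (sold-target)/(limit-target).
        1.0 + (MAX_INCREASE - 1.0) * (sold - target) as f64 / (limit - target) as f64
    };
    (old_price * factor).max(MIN_PRICE)
}

fn main() {
    assert_eq!(next_price(100.0, 30, 30, 50), 100.0); // on target: unchanged
    assert_eq!(next_price(100.0, 50, 30, 50), 200.0); // sold out: capped doubling
    assert_eq!(next_price(100.0, 0, 30, 50), 50.0);   // nothing sold: scaled down
    assert_eq!(next_price(1.0, 0, 30, 50), 1.0);      // never below the floor
}
```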
None at present.
This pricing model is based on the requirements from the basic linear solution proposed in RFC-1, which is a simple dynamic pricing model and only serves as a proof of concept. The present model adds additional considerations to make it more adaptable under real conditions.
This RFC, if accepted, shall be implemented in conjunction with RFC-1.
Improve the networking messages that query storage items from the remote, in order to reduce the bandwidth usage and number of round trips of light clients.
Clients on the Polkadot peer-to-peer network can be divided into two categories: full nodes and light clients. So-called full nodes are nodes that store the content of the chain locally on their disk, while light clients are nodes that don't. In order to access, for example, the balance of an account, a full node can do a disk read, while a light client needs to send a network message to a full node and wait for the full node to reply with the desired value. This reply is in the form of a Merkle proof, which makes it possible for the light client to verify the exactness of the value.
Unfortunately, this network protocol is suffering from some issues:
Once Polkadot and Kusama have transitioned to state_version = 1, which modifies the format of the trie entries, it will be possible to generate Merkle proofs that contain only the hashes of values in the storage. Thanks to this, it is already possible to prove the existence of a key without sending its entire value (only its hash), or to prove that a value has changed or not between two blocks (by sending just their hashes).
Thus, the only reason why aforementioned issues exist is because the existing networking messages don't give the possibility for the querier to query this. This is what this proposal aims at fixing.
This is the continuation of https://github.com/w3f/PPPs/pull/10, which itself is the continuation of https://github.com/w3f/PPPs/pull/5.
The protobuf schema of the networking protocol can be found here: https://github.com/paritytech/substrate/blob/5b6519a7ff4a2d3cc424d78bc4830688f3b184c0/client/network/light/src/schema/light.v1.proto
The proposal is to modify this protocol in this way:
@@ -11,6 +11,7 @@ message Request {
Also note that child tries aren't considered as descendants of the main trie when it comes to the includeDescendants
flag. In other words, if the request concerns the main trie, no content coming from child tries is ever sent back.
This protocol keeps the same maximum response size limit as currently exists (16 MiB). It is not possible for the querier to know in advance whether its query will lead to a reply that exceeds the maximum size. If the reply is too large, the replier should send back only a limited number (but at least one) of requested items in the proof. The querier should then send additional requests for the rest of the items. A response containing none of the requested items is invalid.
The server is allowed to silently discard some keys of the request if it judges that the number of requested keys is too high. This is in line with the fact that the server might truncate the response.
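The truncation-and-retry flow described above can be sketched as follows; keys, proof sizes, and the size limit are simplified stand-ins for the real proof machinery:

```rust
// Sketch: the server answers with as many requested items as fit in the size
// limit (but always at least one); the client re-requests the remainder.
fn serve(items: &[(u32, usize)], limit: usize) -> Vec<u32> {
    let mut used = 0;
    let mut out = Vec::new();
    for &(key, size) in items {
        if !out.is_empty() && used + size > limit {
            break; // truncate, but never return an empty (invalid) response
        }
        used += size;
        out.push(key);
    }
    out
}

fn main() {
    let requested = [(1, 6), (2, 6), (3, 6)]; // (key, proof-size) pairs
    let limit = 10;
    let mut got = Vec::new();
    let mut pending: Vec<(u32, usize)> = requested.to_vec();
    // Client loop: keep asking for the items not yet received.
    while !pending.is_empty() {
        let answered = serve(&pending, limit);
        assert!(!answered.is_empty()); // a response with no items is invalid
        pending.retain(|(k, _)| !answered.contains(k));
        got.extend(answered);
    }
    assert_eq!(got, vec![1, 2, 3]);
}
```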
Drawbacks
This proposal doesn't handle one specific situation: what if a proof containing a single specific item would exceed the response size limit? For example, if the response size limit was 1 MiB, querying the runtime code (which is typically 1.0 to 1.5 MiB) would be impossible as it's impossible to generate a proof less than 1 MiB. The response size limit is currently 16 MiB, meaning that no single storage item must exceed 16 MiB.
Unfortunately, because it's impossible to verify a Merkle proof before having received it entirely, parsing the proof in a streaming way is also not possible.
A way to solve this issue would be to Merkle-ize large storage items, so that a proof could include only a portion of a large storage item. Since this would require a change to the trie format, it is not realistically feasible in a short time frame.
Testing, Security, and Privacy
The main security consideration concerns the size of replies and the resources necessary to generate them. It is for example easily possible to ask for all keys and values of the chain, which would take a very long time to generate. Since responses to this networking protocol have a maximum size, the replier should truncate proofs that would lead to the response being too large. Note that it is already possible to send a query that would lead to a very large reply with the existing network protocol. The only thing that this proposal changes is that it would make it less complicated to perform such an attack.
Implementers of the replier side should be careful to detect early on when a reply would exceed the maximum reply size, rather than unconditionally generating a reply, as this could take a very large amount of CPU, disk I/O, and memory. Existing implementations might currently be accidentally protected from such an attack thanks to the fact that requests have a maximum size, and thus that the list of keys in the query was bounded. After this proposal, this accidental protection would no longer exist.
Malicious server nodes might truncate Merkle proofs even when they don't strictly need to, and it is not possible for the client to (easily) detect this situation. However, malicious server nodes can already do undesirable things such as throttle down their upload bandwidth or simply not respond. There is no need to handle unnecessarily truncated Merkle proofs any differently than a server simply not answering the request.
Performance, Ergonomics, and Compatibility
Performance
It is unclear to the author of the RFC what the performance implications are. Servers are supposed to have limits to the amount of resources they use to respond to requests, and as such the worst that can happen is that light client requests become a bit slower than they currently are.
Ergonomics
Irrelevant.
Compatibility
The prior networking protocol is maintained for now. The older version of this protocol could get removed in a long time.
Prior Art and References
None. This RFC is a clean-up of an existing mechanism.
Unresolved Questions
None
Summary
This document is a proposal for restructuring the bulk markets in the Polkadot UC's coretime allocation system to improve efficiency and fairness. The proposal suggests separating the BULK_PERIOD into MARKET_PERIOD and RENEWAL_PERIOD, allowing for market-driven price discovery through a clearing-price Dutch auction during the MARKET_PERIOD, followed by renewal offers at the MARKET_PRICE during the RENEWAL_PERIOD. The new system ensures synchronicity between renewal and market prices, fairness among all current tenants, and efficient price discovery, while preserving price caps to provide security for current tenants. It seeks to start a discussion about the possibility of long-term leases.
Motivation
While the initial RFC-1 has provided a robust framework for Coretime allocation within the Polkadot UC, this proposal builds upon its strengths and uses many provided building blocks to address some areas that could be further improved.
In particular, this proposal introduces the following changes:
The premise of this proposal is to reduce complexity by introducing a common price (that develops relative to capacity consumption of the Polkadot UC), while still allowing market forces to add efficiency. Long-term lease owners still receive priority IF they can pay (close to) the market price. This prevents a situation where the renewal price significantly diverges from market prices, which allows for core captures. While maximum price increase certainty might seem contradictory to efficient price discovery, the proposed model aims to balance these elements, utilizing market forces to determine the price and allocate cores effectively within certain bounds. It must be stated that potential price increases remain predictable (in the worst case) but could be higher than in the originally proposed design. The argument remains, however, that we need to allow market forces to affect all prices for efficient Coretime pricing and allocation.
Ultimately, the framework proposed here adheres to all requirements stated in RFC-1.
Stakeholders
Primary stakeholder sets are:
- Protocol researchers and developers, largely represented by the Polkadot Fellowship and Parity Technologies' Engineering division.
- Polkadot Parachain teams both present and future, and their users.
- Polkadot DOT token holders.
Explanation
Bulk Markets
The BULK_PERIOD has been restructured into two primary segments: the MARKET_PERIOD and RENEWAL_PERIOD, along with an auxiliary SETTLEMENT_PERIOD. This latter period doesn't necessitate any actions from the coretime system chain, but it facilitates a more efficient allocation of coretime in secondary markets. A significant departure from the original proposal lies in the timing of renewals, which now occur post-market phase. This adjustment aims to harmonize renewal prices with their market counterparts, ensuring a more consistent and equitable pricing model.
Market Period (14 days)
Drawbacks
There are trade-offs that arise from this proposal compared to the initial model. The most notable one is that here, I prioritize requirement 6 over requirement 2. The price, in the very "worst case" (meaning a huge explosion in demand for coretime), could lead to a much larger increase in Coretime prices. From an economic perspective, this (rare edge case) would also mean that we'd vastly underprice Coretime in the original model, leading to highly inefficient allocations.
Prior Art and References
This RFC builds extensively on the available ideas put forward in RFC-1.
Additionally, I want to express a special thanks to Samuel Haefner and Shahar Dobzinski for fruitful discussions and helping me structure my thoughts.
Unresolved Questions
Authors: Gabriel Facco de Arruda
Summary
This RFC proposes changes that enable the use of absolute locations in AccountId derivations, which allows protocols built using XCM to have static account derivations in any runtime, regardless of its position in the family hierarchy.
Motivation
These changes would allow protocol builders to leverage absolute locations to maintain the exact same derived account address across all networks in the ecosystem, thus enhancing user experience.
One such protocol, that is the original motivation for this proposal, is InvArch's Saturn Multisig, which gives users a unifying multisig and DAO experience across all XCM connected chains.
Stakeholders
- Ecosystem developers
Explanation
This proposal aims to make it possible to derive accounts for absolute locations, enabling protocols that require the ability to maintain the same derived account in any runtime. This is done by deriving accounts from the hash of described absolute locations, which are static across different destinations.
The same location can be represented in both a relative form and an absolute form.
DescribeFamily
The DescribeFamily location descriptor is part of the HashedDescription MultiLocation hashing system and exists to describe locations in an easy format for encoding and hashing, so that an AccountId can be derived from this MultiLocation.
This implementation contains a match statement that does not match against absolute locations, so changes to it involve matching against absolute locations and providing appropriate descriptions for hashing.
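The core idea — that hashing an absolutely-described location yields the same account on every chain — can be sketched with simplified, self-contained types. All names here (Junction, derive_account) are illustrative stand-ins, not the actual xcm or xcm-builder API, and the real system hashes with blake2 rather than Rust's default hasher:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative stand-ins for XCM junctions; not the real xcm types.
#[derive(Hash)]
enum Junction {
    GlobalConsensus(&'static str),
    Parachain(u32),
    AccountIndex(u64),
}

// Derive a stable id from an absolutely-described location. Because the
// description starts from GlobalConsensus (the root of the family
// hierarchy), every runtime computes the same description bytes,
// regardless of its own position in the hierarchy.
fn derive_account(absolute_path: &[Junction]) -> u64 {
    let mut hasher = DefaultHasher::new();
    for junction in absolute_path {
        junction.hash(&mut hasher);
    }
    hasher.finish()
}

fn main() {
    // The same absolute description, evaluated on any chain, yields
    // the same derived account.
    let seen_from_chain_a = derive_account(&[
        Junction::GlobalConsensus("Polkadot"),
        Junction::Parachain(2125),
        Junction::AccountIndex(7),
    ]);
    let seen_from_chain_b = derive_account(&[
        Junction::GlobalConsensus("Polkadot"),
        Junction::Parachain(2125),
        Junction::AccountIndex(7),
    ]);
    assert_eq!(seen_from_chain_a, seen_from_chain_b);
    println!("derived account id: {seen_from_chain_a:#x}");
}
```

A relative description, by contrast, would encode differently depending on the observer's position, which is exactly why it cannot produce a static address across runtimes.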
Drawbacks
No drawbacks have been identified with this proposal.
Testing, Security, and Privacy
Tests can be done using simple unit tests, as this is not a change to XCM itself but rather to types defined in xcm-builder.
Security considerations should be taken with the implementation to make sure no unwanted behavior is introduced.
This proposal does not introduce any privacy considerations.
Performance, Ergonomics, and Compatibility
Performance
Depending on the final implementation, this proposal should not introduce much overhead to performance.
Ergonomics
The ergonomics of this proposal depend on the final implementation details.
Compatibility
Backwards compatibility should remain unchanged, although that depends on the final implementation.
Prior Art and References
DescribeFamily type: https://github.com/paritytech/polkadot-sdk/blob/master/polkadot/xcm/xcm-builder/src/location_conversion.rs#L122
WithComputedOrigin type: https://github.com/paritytech/polkadot-sdk/blob/master/polkadot/xcm/xcm-builder/src/barriers.rs#L153
Authors: ChaosDAO
Summary
This RFC proposes to make modifications to voting power delegations as part of the Conviction Voting pallet. The changes being proposed include:
- Allow a Delegator to vote independently of their Delegate if they so desire.
- Make a change so that when a delegate votes abstain their delegated votes also vote abstain.
- Allow a Delegator to delegate/ undelegate their votes for all tracks with a single call.
Motivation
It has become clear since the launch of OpenGov that there are a few common tropes which pop up time and time again:
- The frequency of referenda is often too high for network participants to have sufficient time to review, comprehend, and ultimately vote on each individual referendum. This means that these network participants end up being inactive in on-chain governance.
- Delegating votes for all tracks currently requires long batched calls which result in high fees for the Delegator - resulting in a reluctance from many to delegate their votes.
We believe (based on feedback from token holders with a larger stake in the network) that if there were some changes made to delegation mechanics, these larger stake holders would be more likely to delegate their voting power to active network participants – thus greatly increasing the support turnout.
Stakeholders
The primary stakeholders of this RFC are:
- The Polkadot Technical Fellowship who will have to research and implement the technical aspects of this RFC
- DOT token holders in general
Explanation
This RFC proposes to make 4 changes to the convictionVoting pallet logic in order to improve the user experience of those delegating their voting power to another account.
- Allow a Delegator to delegate/undelegate their votes for all tracks with a single call: in order to delegate votes across all tracks, a user must currently batch 15 calls, resulting in high costs for delegation. A single call for delegate_all / undelegate_all would reduce the complexity, and therefore the costs, of delegation considerably for prospective Delegators.
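The cost difference can be sketched with a simplified model. The track ids below are illustrative, not the actual OpenGov configuration, and the real convictionVoting pallet also records conviction and balance per delegation:

```rust
use std::collections::BTreeMap;

// Hypothetical track ids standing in for the 15 OpenGov tracks.
const ALL_TRACKS: [u16; 15] = [0, 1, 10, 11, 12, 13, 14, 15, 20, 21, 30, 31, 32, 33, 34];

#[derive(Default)]
struct Delegations {
    // track id -> delegate account
    by_track: BTreeMap<u16, String>,
}

impl Delegations {
    // Today: one `delegate` call per track, batched 15 times.
    fn delegate(&mut self, track: u16, to: &str) {
        self.by_track.insert(track, to.to_string());
    }

    // Proposed: a single extrinsic covering every track at once.
    fn delegate_all(&mut self, to: &str) {
        for track in ALL_TRACKS {
            self.delegate(track, to);
        }
    }

    fn undelegate_all(&mut self) {
        self.by_track.clear();
    }
}

fn main() {
    let mut delegations = Delegations::default();
    delegations.delegate_all("delegate.dot");
    assert_eq!(delegations.by_track.len(), 15);
    delegations.undelegate_all();
    assert!(delegations.by_track.is_empty());
}
```

The fee saving comes from replacing 15 separately-weighed calls (plus batch overhead) with one call whose loop runs entirely inside the runtime.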
Drawbacks
We do not foresee any drawbacks by implementing these changes. If anything we believe that this should help to increase overall voter turnout (via the means of delegation) which we see as a net positive.
Testing, Security, and Privacy
We feel that the Polkadot Technical Fellowship would be the most competent collective to identify the testing requirements for the ideas presented in this RFC.
Performance, Ergonomics, and Compatibility
Performance
This change may add extra chain storage requirements on Polkadot, especially with respect to nested delegations.
Ergonomics & Compatibility
The change to add nested delegations may affect governance interfaces such as Nova Wallet who will have to apply changes to their indexers to support nested delegations. It may also affect the Polkadot Delegation Dashboard as well as Polkassembly & SubSquare.
We want to highlight the importance for ecosystem builders to create a mechanism for indexers and wallets to be able to understand that changes have occurred such as increasing the pallet version, etc.
Prior Art and References
N/A
Unresolved Questions
N/A
Summary
This RFC proposes a new model for a sustainable on-demand parachain registration, involving a smaller initial deposit and periodic rent payments. The new model considers that on-demand chains may be unregistered and later re-registered. The proposed solution also ensures a quick startup for on-demand chains on Polkadot in such cases.
Motivation
With the support of on-demand parachains on Polkadot, there is a need to explore a new, more cost-effective model for registering validation code. In the current model, the parachain manager is responsible for reserving a unique ParaId and covering the cost of storing the validation code of the parachain. These costs can escalate, particularly if the validation code is large. We need a better, sustainable model for registering on-demand parachains on Polkadot to help smaller teams deploy more easily.
This RFC suggests a new payment model to create a more financially viable approach to on-demand parachain registration. In this model, a lower initial deposit is required, followed by recurring payments upon parachain registration.
This new model will coexist with the existing one-time deposit payment model, offering teams seeking to deploy on-demand parachains on Polkadot a more cost-effective alternative.
- The solution MUST allow anyone to pay the rent.
- The solution MUST prevent the removal of validation code if it could still be required for disputes or approval checking.
Stakeholders
- Future Polkadot on-demand Parachains
Explanation
This RFC proposes a set of changes that will enable the new rent-based approach to registering and storing validation code on-chain.
The new model, compared to the current one, will require periodic rent payments. The parachain won't be pruned automatically if the rent is not paid, but by permitting anyone to prune the parachain and rewarding the caller, there will be an incentive for the removal of the validation code.
On-demand parachains should still be able to utilize the current one-time payment model. However, given the size of the deposit required, it's highly likely that most on-demand parachains will opt for the new rent-based model.
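The incentive mechanics described above — rent lapses, anyone may prune, and the caller is rewarded from the deposit — can be sketched with a minimal model. The type and field names here are illustrative, not the actual paras-registrar interface, and the reward fraction is an arbitrary placeholder:

```rust
// Illustrative record of a rented validation-code entry.
struct RentedCode {
    paid_until_block: u32,
    deposit: u64,
}

// Anyone may prune a parachain whose rent has lapsed; the caller keeps a
// fraction of the deposit as a reward, incentivising state cleanup.
// Returns the caller's reward, or None if the entry cannot be pruned yet.
fn try_prune(entry: &RentedCode, now: u32, reward_fraction_percent: u64) -> Option<u64> {
    if now <= entry.paid_until_block {
        return None; // rent still paid; cannot prune
    }
    Some(entry.deposit * reward_fraction_percent / 100)
}

fn main() {
    let entry = RentedCode { paid_until_block: 1_000, deposit: 500 };
    // Before the rent lapses, pruning is rejected.
    assert_eq!(try_prune(&entry, 900, 10), None);
    // After the rent lapses, the caller earns 10% of the deposit.
    assert_eq!(try_prune(&entry, 1_001, 10), Some(50));
}
```

Setting the reward (and T::DataDepositPerByte) high enough is what makes pruning economically attractive in practice.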
To enable parachain re-registration, we should introduce a new extrinsic in the paras-registrar pallet that allows this. The logic of this extrinsic will be the same as regular registration, with the distinction that it can be called by anyone, and the required deposit will be smaller since it only has to cover the storage of the validation code.
Drawbacks
This RFC does not alter the process of reserving a ParaId, and therefore, it does not propose reducing it, even though such a reduction could be beneficial.
This RFC doesn't delve into the specifics of the configuration values for parachain registration but rather focuses on the mechanism; still, configuring those values carelessly could lead to potential problems.
Since the validation code hash and head data are not removed when the parachain is pruned but only when the deregister extrinsic is called, the T::DataDepositPerByte must be set to a higher value to create a strong enough incentive for removing it from the state.
Testing, Security, and Privacy
The implementation of this RFC will be tested on Rococo first.
Proper research should be conducted on setting the configuration values of the new system since these values can have great impact on the network.
An audit is required to ensure the implementation's correctness.
The proposal introduces no new privacy concerns.
Performance, Ergonomics, and Compatibility
Performance
This RFC should not introduce any performance impact.
Ergonomics
This RFC does not affect the current parachains, nor the parachains that intend to use the one-time payment model for parachain registration.
Compatibility
This RFC does not break compatibility.
Prior Art and References
Prior discussion on this topic: https://github.com/paritytech/polkadot-sdk/issues/1796
Unresolved Questions
None at this time.
Summary
Rather than enforce a limit to the total memory consumption on the client side by loading the value at :heappages, enforce that limit on the runtime side.
Motivation
From the early days of Substrate up until recently, the runtime was present in two forms: the wasm runtime (wasm bytecode passed through an interpreter) and the native runtime (native code directly run by the client).
Since the wasm runtime has a lower amount of available memory (4 GiB maximum) compared to the native runtime, and in order to ensure that the wasm and native runtimes always produce the same outcome, it was necessary to clamp the amount of memory available to both runtimes to the same value.
In order to achieve this, a special storage key (a "well-known" key) :heappages was introduced; it represents the number of "wasm pages" (one page equals 64 KiB) of memory that are available to the memory allocator of the runtimes. If this storage key is absent, it defaults to 2048, which is 128 MiB.
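The arithmetic behind that default can be checked directly (a trivial sketch; the constant names are illustrative):

```rust
// One wasm page is 64 KiB.
const WASM_PAGE_SIZE: usize = 64 * 1024;

// Convert a :heappages value into an allocator memory limit in bytes.
fn heap_pages_to_bytes(pages: usize) -> usize {
    pages * WASM_PAGE_SIZE
}

fn main() {
    // The default of 2048 pages is 2048 * 64 KiB = 128 MiB.
    assert_eq!(heap_pages_to_bytes(2048), 128 * 1024 * 1024);
    println!(
        "default heap limit: {} MiB",
        heap_pages_to_bytes(2048) / (1024 * 1024)
    );
}
```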
The native runtime has since disappeared, but the concept of "heap pages" still exists. This RFC proposes a simplification to the design of Polkadot by removing the concept of "heap pages" as currently known, and proposes alternative ways to achieve the goal of limiting the amount of memory available.
Stakeholders
Client implementers and low-level runtime developers.
Explanation
This RFC proposes the following changes to the client:
- The client no longer considers :heappages as special.
Each parachain can choose the option that they prefer, but the author of this RFC strongly suggests either option C or B.
Drawbacks
In case of path A, there is one situation where the behaviour pre-RFC is not equivalent to the one post-RFC: when a host function that performs an allocation (for example ext_storage_get) is called, without this RFC this allocation might fail due to reaching the maximum heap pages, while after this RFC it will always succeed.
This is most likely not a problem, as storage values aren't supposed to be larger than a few megabytes at the very maximum.
In the unfortunate event where the runtime runs out of memory, path B would make it more difficult to relax the memory limit, as we would need to re-upload the entire Wasm, compared to updating only :heappages in path A or before this RFC.
In the case where the runtime runs out of memory only in the specific event where the Wasm runtime is modified, this could brick the chain. However, this situation is no different from the thousands of other ways that a bug in the runtime can brick a chain, and there's no reason to be particularly worried about this one.
Testing, Security, and Privacy
This RFC would reduce the chance of a consensus issue between clients.
The :heappages are a rather obscure feature, and it is not clear what happens in some corner cases, such as the value being too large (error? clamp?) or malformed. This RFC would completely erase these questions.
Performance, Ergonomics, and Compatibility
Performance
In the case of path A, it is unclear how performance would be affected. Path A consists of moving client-side operations to the runtime without changing these operations, and as such performance differences are expected to be minimal. Overall, we're talking about one addition/subtraction per malloc and per free, so this is more than likely completely negligible.
In the case of paths B and C, the performance gain would be a net positive, as this RFC strictly removes things.
Ergonomics
This RFC would isolate the client and runtime more from each other, making it a bit easier to reason about the client or the runtime in isolation.
Compatibility
Not a breaking change. The runtime-side changes can be applied immediately (without even having to wait for changes in the client), then as soon as the runtime is updated, the client can be updated without any transition period. One can even consider updating the client before the runtime, as it corresponds to path C.
Prior Art and References
None.
Unresolved Questions
None.
Summary
This RFC proposes adding a trivial governance track on Kusama to facilitate X (formerly known as Twitter) posts on the @kusamanetwork account. The technical aspect of implementing this in the runtime is very inconsequential and straightforward, though it might get more technical if the Fellowship wants to regulate this track with a non-existent permission set. If this is implemented it would need to be followed up with:
- the establishment of specifications for proposing X posts via this track, and
- the development of tools/processes to ensure that the content contained in referenda enacted in this track would be automatically posted on X.
Motivation
The overall motivation for this RFC is to decentralize the management of the Kusama brand/communication channel to KSM holders. This is necessary in my opinion primarily
because of the inactivity of the account in recent history, with posts spanning weeks or months apart. I am currently unaware of who/what entity manages the Kusama
X account, but if they are affiliated with Parity or W3F this proposed solution could also offload some of the legal ramifications of making (or not making)
Finally, this RFC is the epitome of experimentation that Kusama is ideal for. This proposal may spark newfound excitement for Kusama and help us realize Kusama's potential
for pushing boundaries and trying new unconventional ideas.
Stakeholders
This idea has not been formalized by any individual (or group of) KSM holder(s). To my knowledge the socialization of this idea is contained
entirely in my recent X post here, but it is possible that an idea like this one has been discussed in
other places. It appears to me that the ecosystem would welcome a change like this which is why I am taking action to formalize the discussion.
Explanation
The implementation of this idea can be broken down into 3 primary phases:
Phase 1 - Track configurations
First, we begin with this RFC to ensure all feedback can be discussed and implemented in the proposal. After the Fellowship and the community come to a reasonable
Drawbacks
The main drawback to this change is that it requires a lot of off-chain coordination. It's easy enough to include the track on Kusama but it's a totally different
challenge to make it function as intended. The tools need to be built and the auth tokens need to be managed. It would certainly add an administrative burden to whoever
manages the X account since they would either need to run the tools themselves or manage auth tokens.
agency to manage posts. It wouldn't be decentralized but it would probably be more effective in terms of creating good content.
Finally, this solution is merely pseudo-decentralization since the X account manager would still have ultimate control of the account. It's decentralized insofar as
the auth tokens are given to people actually running the tools; a house of cards is required to facilitate X posts via this track. Not ideal.
Testing, Security, and Privacy
There's major precedent for configuring tracks on openGov given the amount of power tracks have, so it shouldn't be hard to come up with a sound configuration.
That's why I recommend restricting permissions of this track to remarks and batches of remarks, or something equally inconsequential.
Building the tools for this implementation is really straight-forward and could be audited by Fellowship members, and the community at large, on Github.
The largest security concern would be the management of Kusama's X account's auth tokens. We would need to ensure that they aren't compromised.
Performance, Ergonomics, and Compatibility
Performance
If a track on Kusama promises users that compliant referenda enacted therein would be posted on Kusama's X account, users would expect that track to perform as promised.
If the house of cards tumbles down and a compliant referendum doesn't actually get anything posted, users might think that Kusama is broken or unreliable. This
could be damaging to Kusama's image and cause people to question the soundness of other features on Kusama.
As mentioned in the drawbacks, the performance of this feature would depend on off-chain coordinations. We can reduce the administrative burden of these coordinations
by funding third parties with the Treasury to deal with it, but then we're relying on trusting these parties.
Ergonomics
By adding a new track to Kusama, governance platforms like Polkassembly or Nova Wallet would need to include it on their applications. This shouldn't be too
much of a burden or overhead since they've already built the infrastructure for other openGov tracks.
Compatibility
This change wouldn't break any compatibility as far as I know.
References
One reference to a similar feature requiring on-chain/off-chain coordination would be the Kappa-Sigma-Mu Society. Nothing on-chain necessarily enforces the rules
Summary
The current size of the decision deposit on some tracks is too high for many proposers. As a result, those needing to use it have to find someone else willing to put up the deposit for them - and a number of legitimate attempts to use the root track have timed out. This track would provide a more affordable (though slower) route for these holders to use the root track.
Motivation
There have been recent attempts to use the Kusama root track which have timed out with no decision deposit placed. Usually, these referenda have been related to parachain registration related issues.
Explanation
Propose to address this by adding a new referendum track [22] Referendum Deposit which can place the decision deposit on another referendum. This would require the following changes:
- [Referenda Pallet] Modify the placeDecisionDeposit function to additionally allow it to be called by root, with root calls bypassing the requirement for a deposit payment.
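The proposed origin check can be illustrated with a stripped-down sketch. This is not the real referenda pallet's place_decision_deposit logic — the function signature, the Origin enum, and the deposit amount below are all simplified placeholders for illustration:

```rust
// Simplified stand-in for a dispatch origin.
enum Origin {
    Root,
    Signed { free_balance: u64 },
}

// Placeholder decision deposit for an expensive track.
const DECISION_DEPOSIT: u64 = 1_000;

// Returns the amount actually reserved, or an error.
fn place_decision_deposit(origin: Origin) -> Result<u64, &'static str> {
    match origin {
        // A root origin (i.e. a referendum on the proposed track)
        // bypasses the deposit payment entirely.
        Origin::Root => Ok(0),
        Origin::Signed { free_balance } if free_balance >= DECISION_DEPOSIT => {
            Ok(DECISION_DEPOSIT)
        }
        Origin::Signed { .. } => Err("insufficient balance for decision deposit"),
    }
}

fn main() {
    assert_eq!(place_decision_deposit(Origin::Root), Ok(0));
    assert_eq!(
        place_decision_deposit(Origin::Signed { free_balance: 2_000 }),
        Ok(1_000)
    );
    assert!(place_decision_deposit(Origin::Signed { free_balance: 10 }).is_err());
}
```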
Drawbacks
This track would provide a route to starting a root referendum with a much-reduced slashable deposit. This might be undesirable but, assuming the decision deposit cost for this track is still high enough, slashing would still act as a disincentive.
An alternative to this might be to reduce the decision deposit size on some of the more expensive tracks. However, part of the purpose of the high deposit - at least on the root track - is to prevent spamming the limited queue with junk referenda.
Testing, Security, and Privacy
Will need additional test cases for the modified pallet and runtime. No security or privacy issues.
Performance, Ergonomics, and Compatibility
Performance
No significant performance impact.
Ergonomics
Only changes related to adding the track. Existing functionality is unchanged.
Compatibility
No compatibility issues.
Prior Art and References
- Recent discussion / referendum for an alternative way to address this issue: Kusama Referendum 340 - Funding a Decision Deposit Sponsor
Summary
A pallet to facilitate enhanced multisig accounts. The main enhancement is that we store a multisig account in state with related info (signers, threshold, etc.). The module affords enhanced control over administrative operations such as adding/removing signers, changing the threshold, deleting the account, and canceling an existing proposal. Each signer can approve/reject a proposal while it still exists. The proposal is not intended to migrate or get rid of the existing multisig; it's to allow both options to coexist.
For the rest of the RFC we use the following terms:
- Stateful Multisig to refer to the proposed pallet.
- Stateless Multisig to refer to the current multisig pallet in polkadot-sdk.
Motivation
Problem
Entities in the Polkadot ecosystem need to have a way to manage their funds and other operations in a secure and efficient way. Multisig accounts are a common way to achieve this. Entities by definition change over time, members of the entity may change, threshold requirements may change, and the multisig account may need to be deleted. For even more enhanced hierarchical control, the multisig account may need to be controlled by other multisig accounts.
Current native solutions for multisig operations are less optimal, performance-wise (as we'll explain later in the RFC), and lack fine-grained control over the multisig account.
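The kind of stored multisig state described above can be sketched with self-contained types. The struct and method names are illustrative only — the actual pallet's storage layout and extrinsics differ, and admin operations would themselves be executed through an approved proposal:

```rust
use std::collections::BTreeSet;

// Illustrative on-chain record for a stateful multisig account.
struct MultisigAccount {
    signers: BTreeSet<String>,
    threshold: u32,
}

impl MultisigAccount {
    fn add_signer(&mut self, who: &str) {
        self.signers.insert(who.to_string());
    }

    // Refuse removals that would make the threshold unreachable.
    fn remove_signer(&mut self, who: &str) -> Result<(), &'static str> {
        if self.signers.contains(who) && (self.signers.len() as u32 - 1) < self.threshold {
            return Err("threshold would exceed signer count");
        }
        self.signers.remove(who);
        Ok(())
    }

    // A proposal executes once it has gathered `threshold` approvals.
    fn is_approved(&self, approvals: u32) -> bool {
        approvals >= self.threshold
    }
}

fn main() {
    let mut account = MultisigAccount {
        signers: ["alice", "bob", "carol"].iter().map(|s| s.to_string()).collect(),
        threshold: 2,
    };
    assert!(!account.is_approved(1));
    assert!(account.is_approved(2));
    account.add_signer("dave");
    assert_eq!(account.signers.len(), 4);
    assert!(account.remove_signer("dave").is_ok());
}
```

Because the signer set and threshold live in state, membership changes don't invalidate the account's address, which is what enables entities to evolve over time.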
and much more...
Stakeholders
- Polkadot holders
- Polkadot developers
Explanation
I created the stateful multisig pallet during my studies at the Polkadot Blockchain Academy under supervision from @shawntabrizi and @ank4n. After that, I enhanced it to be fully functional, and this is draft PR#3300 in polkadot-sdk. I'll list all the details and design decisions in the following sections. Note that the PR does not map 1:1 to the current RFC, as the RFC is a more polished version of the PR, updated based on feedback and discussions.
Let's start with a sequence diagram to illustrate the main operations of the Stateful Multisig.
- In case the threshold is lower than the number of approvers, the proposal is still valid.
- In case the threshold is higher than the number of approvers, we catch it during proposal execution and error.
Standard audit/review requirements apply.
A back-of-the-envelope calculation shows that the stateful multisig is more efficient than the stateless multisig, given its smaller footprint on blocks.
A quick review of the extrinsics for both, as they affect block size:
Stateless Multisig:
So even though the stateful multisig has a larger state size, it's still more efficient in terms of block size and total footprint on the blockchain.
The Stateful Multisig will have better ergonomics for managing multisig accounts for both developers and end-users.
This RFC is compatible with the existing implementation and can be handled via upgrades and migration. It's not intended to replace the existing multisig pallet.
multisig pallet in polkadot-sdk
This proposes to increase the maximum length of PGP Fingerprint values from a 20 bytes/chars limit to a 40 bytes/chars limit.
Pretty Good Privacy (PGP) Fingerprints are shortened versions of their corresponding public key that may be printed on a business card.
They may be used by someone to validate the correct corresponding Public Key.
The maximum length of identity PGP Fingerprint values should be increased from the current 20 bytes/chars limit to at least a 40 bytes/chars limit to support PGP Fingerprints and GPG Fingerprints.
If a user tries to set an on-chain identity by creating an extrinsic using Polkadot.js with identity > setIdentity(info), and they provide their 40-character-long PGP Fingerprint or GPG Fingerprint, which is longer than the maximum length of 20 bytes/chars [u8;20], then they will encounter this error:
createType(Call):: Call: failed decoding identity.setIdentity:: Struct: failed on args: {...}:: Struct: failed on pgpFingerprint: Option<[u8;20]>:: Expected input with 20 bytes (160 bits), found 40 bytes
Increasing maximum length of identity PGP Fingerprint values from the current 20 bytes/chars limit to at least a 40 bytes/chars limit would overcome these errors and support PGP Fingerprints and GPG Fingerprints, satisfying the solution requirements.
No drawbacks have been identified.
Implementations would be tested for adherence by checking that 40 bytes/chars PGP Fingerprints are supported.
No effect on security or privacy beyond what already exists has been identified.
No implementation pitfalls have been identified.
It would be an optimization, since the associated exposed interfaces to developers and end-users could start being used.
To minimize additional overhead the proposal suggests a 40 bytes/chars limit since that would at least provide support for PGP Fingerprints, satisfying the solution requirements.
No potential ergonomic optimizations have been identified.
Updates to Polkadot.js Apps, API and its documentation and those referring to it may be required.
No prior articles or references.
No further questions at this stage.
This proposes to require a slashable deposit in the broker pallet when initially purchasing or renewing Bulk Coretime or Instantaneous Coretime cores.
Additionally, it proposes to record a reputational status based on the behavior of the purchaser, as it relates to their use of Kusama Coretime cores that they purchase, and to possibly reserve a proportion of the cores for prospective purchasers that have an on-chain identity.
There are sales of Kusama Coretime cores that are scheduled to occur later this month by Coretime Marketplace Lastic.xyz, initially in limited quantities, and potentially also by RegionX in future, subject to their Polkadot referendum #582. This poses a risk in that some purchasers may buy Kusama Coretime cores with no intention of actually placing a workload on them or leasing them out, which would prevent those that wish to purchase and actually use Kusama Coretime cores from being able to use any cores at all.
The slashable deposit, if set too high, may result in an economic impact where fewer Kusama Coretime cores are purchased.
Lack of a slashable deposit in the Broker pallet is a security concern, since it exposes Kusama Coretime sales to potential abuse.
Reserving a proportion of Kusama Coretime sales cores for those with on-chain identities should not be to the exclusion of accounts that wish to remain anonymous or cause cores to be wasted unnecessarily. As such, if cores that are reserved for on-chain identities remain unsold then they should be released to anonymous accounts that are on a waiting list.
No implementation pitfalls have been identified.
It should improve performance, as it reduces the potential for state bloat: there is less risk of the undesirable Kusama Coretime sales activity that would occur with no slashable deposit requirement and no reputational risk to purchasers that waste or misuse Kusama Coretime cores.
The solution proposes to minimize the risk of some Kusama Coretime cores not even being used or leased to perform any tasks at all.
It will be important to monitor and manage the slashable deposits, purchaser reputations, and utilization of the proportion of cores that are reserved for accounts with an on-chain identity.
The mechanism for setting a slashable deposit amount should avoid undue complexity for users.
Updates to Polkadot.js Apps, API and its documentation and those referring to it may be required.
No prior articles.
This RFC proposes the addition of a secondary market feature to either the broker pallet or as a separate pallet maintained by Lastic, enabling users to list and purchase regions. This includes creating, purchasing, and removing listings, as well as emitting relevant events and handling associated errors.
Currently, the broker pallet lacks functionality for a secondary market, which limits users' ability to freely trade regions. This RFC aims to introduce a secure and straightforward mechanism for users to list regions they own for sale and allow other users to purchase these regions.
While integrating this functionality directly into the broker pallet is one option, another viable approach is to implement it as a separate pallet maintained by Lastic. This separate pallet would have access to the broker pallet and add minimal functionality necessary to support the secondary market.
Adding smart contracts to the Coretime chain could also address this need; however, this process is expected to be lengthy and complex. We cannot afford to wait for this extended timeline to enable basic secondary market functionality. By proposing either integration into the broker pallet or the creation of a dedicated pallet, we can quickly enhance the flexibility and utility of the broker pallet, making it more user-friendly and valuable.
Primary stakeholders include:
This RFC introduces the following key features:
The main drawback of adding this complexity directly to the broker pallet is increased maintenance overhead. Therefore, we propose adding the functionality as a separate pallet on the Coretime chain. To relieve the pressure of implementing these features, the implementation, along with unit tests, would be handled by Lastic (Aurora Makovac, Philip Lucsok).
There are potential risks of security vulnerabilities in the new market functionalities, such as unauthorized region transfers or incorrect balance adjustments. Therefore, extensive security measures would have to be implemented.
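To make the intended listing lifecycle (create, purchase, remove) concrete, here is a minimal, self-contained sketch in plain Rust. It models regions and balances in memory only; the names (`Market`, `Listing` errors, and so on) and the error cases are illustrative assumptions, not the proposed pallet API:

```rust
use std::collections::HashMap;

// Minimal in-memory model of the proposed secondary-market flow.
// All types and names are illustrative, not the actual pallet interface.
type AccountId = u32;
type RegionId = u32;
type Balance = u64;

#[derive(Debug, PartialEq)]
enum MarketError {
    NotOwner,
    NotListed,
    InsufficientBalance,
}

struct Market {
    owners: HashMap<RegionId, AccountId>,
    listings: HashMap<RegionId, Balance>, // region -> asking price
    balances: HashMap<AccountId, Balance>,
}

impl Market {
    // Create a listing: only the current owner may list a region for sale.
    fn create_listing(&mut self, who: AccountId, region: RegionId, price: Balance) -> Result<(), MarketError> {
        if self.owners.get(&region) != Some(&who) {
            return Err(MarketError::NotOwner);
        }
        self.listings.insert(region, price);
        Ok(())
    }

    // Purchase a listed region: transfer the price, then the ownership.
    fn purchase(&mut self, buyer: AccountId, region: RegionId) -> Result<(), MarketError> {
        let price = *self.listings.get(&region).ok_or(MarketError::NotListed)?;
        if self.balances.get(&buyer).copied().unwrap_or(0) < price {
            return Err(MarketError::InsufficientBalance);
        }
        let seller = self.owners[&region];
        *self.balances.entry(buyer).or_insert(0) -= price;
        *self.balances.entry(seller).or_insert(0) += price;
        self.owners.insert(region, buyer);
        self.listings.remove(&region);
        Ok(())
    }

    // Remove a listing: only the owner may delist.
    fn remove_listing(&mut self, who: AccountId, region: RegionId) -> Result<(), MarketError> {
        if self.owners.get(&region) != Some(&who) {
            return Err(MarketError::NotOwner);
        }
        self.listings.remove(&region).map(|_| ()).ok_or(MarketError::NotListed)
    }
}

fn main() {
    let mut m = Market {
        owners: HashMap::from([(7, 1)]), // region 7 owned by account 1
        listings: HashMap::new(),
        balances: HashMap::from([(1, 0), (2, 100)]),
    };
    m.create_listing(1, 7, 60).unwrap();
    m.purchase(2, 7).unwrap();
    assert_eq!(m.owners[&7], 2);    // ownership moved to the buyer
    assert_eq!(m.balances[&1], 60); // seller received the price
}
```

In an actual pallet each `Result` variant would map to a dispatch error and each successful call would emit an event; the sketch only shows the state transitions the RFC describes.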
This RFC proposes the integration of smart contracts on the Coretime chain to enhance flexibility and enable complex decentralized applications, including secondary market functionalities.
Currently, the Coretime chain lacks the capability to support smart contracts, which limits the range of decentralized applications that can be developed and deployed. By enabling smart contracts, the Coretime chain can facilitate more sophisticated functionalities such as automated region trading, dynamic pricing mechanisms, and other decentralized applications that require programmable logic. This will enhance the utility of the Coretime chain, attract more developers, and create more opportunities for innovation.
Additionally, while there is a proposal (#885) to allow EVM-compatible contracts on Polkadot’s Asset Hub, the implementation of smart contracts directly on the Coretime chain will provide synchronous interactions and avoid the complexities of asynchronous operations via XCM.
Primary stakeholders include:
This RFC introduces the following key components:
There are several drawbacks to consider:
Change the process of a parachain runtime upgrade to become an off-chain process with regard to the relay chain. Upgrades are still contained in parachain blocks, but will no longer need to end up in relay chain blocks nor in relay chain state.
Having parachain runtime upgrades go through the relay chain has always been seen as a scalability concern. Due to optimizations in statement distribution and asynchronous backing it became less crucial.
The issues with on-chain runtime upgrades are:
The major drawback of this solution is the same as for any solution that moves work off-chain: it adds complexity to the node. For example, nodes that need the PVF must store it separately, together with their own pruning strategy.
Implementations adhering to this RFC will respond to PVF requests with the actual PVF, if they have it. Requesters will persist received PVFs on disk until they are replaced by a new one. Implementations must not be lazy.
This proposal lightens the load on the relay chain and is thus generally beneficial for the performance of the network. This is achieved by the following:
End users are only affected by better performance and more stable block times.
Parachains will need to implement the introduced request/response protocol and adapt to the new signalling mechanism via a UMP message.
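As a sketch, the request/response side of such a protocol could be modelled like this. All type, variant, and field names here are illustrative assumptions, not the RFC's wire format; the point is only that a node answers a PVF request with the full code blob if it holds it, and persists blobs it receives:

```rust
use std::collections::HashMap;

// Hypothetical sketch of a PVF request/response protocol.
// All names are illustrative assumptions, not part of the RFC.

/// Response to a request for the PVF with a given validation-code hash.
#[derive(Debug, PartialEq)]
enum PvfResponse {
    /// The node has the PVF and returns the full code blob.
    Code(Vec<u8>),
    /// The node does not (or no longer) hold the requested PVF.
    NotFound,
}

/// In-memory stand-in for the node's on-disk PVF store.
struct PvfStore {
    map: HashMap<[u8; 32], Vec<u8>>, // code hash -> validation code
}

impl PvfStore {
    fn new() -> Self {
        Self { map: HashMap::new() }
    }

    /// Persist a received PVF (a real node would write this to disk and
    /// prune it once a newer code hash replaces it).
    fn insert(&mut self, code_hash: [u8; 32], code: Vec<u8>) {
        self.map.insert(code_hash, code);
    }

    /// Answer an incoming request: respond with the actual PVF if we have it.
    fn handle_request(&self, code_hash: &[u8; 32]) -> PvfResponse {
        match self.map.get(code_hash) {
            Some(code) => PvfResponse::Code(code.clone()),
            None => PvfResponse::NotFound,
        }
    }
}

fn main() {
    let mut store = PvfStore::new();
    store.insert([1u8; 32], vec![0, 1, 2]);
    assert_eq!(store.handle_request(&[1u8; 32]), PvfResponse::Code(vec![0, 1, 2]));
    assert_eq!(store.handle_request(&[2u8; 32]), PvfResponse::NotFound);
}
```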
We will continue to support the old mechanism for code upgrades for a while, but will start to impose stricter limits over time as the number of registered parachains goes up.
Off-chain runtime upgrades have been discussed before; the architecture described here is simpler, though, as it piggybacks on already existing features, namely:
The SetFeesMode instruction and the fees_mode register allow for the existence of JIT withdrawal.
JIT withdrawal complicates the fee mechanism and leads to bugs and unexpected behaviour.
The proposal is to remove said functionality.
Another effort to simplify fee handling in XCM.
The JIT withdrawal mechanism creates bugs such as not being able to get fees when all assets are put into holding and none left in the origin location. This is a confusing behavior, since there are funds for fees, just not where the XCVM wants them. The XCVM should have only one entrypoint to fee payment, the holding register. That way there is also less surface for bugs.
The SetFeesMode instruction will be removed. The Fees Mode register will be removed.
Users will have to make sure to put enough assets in WithdrawAsset, where previously some things might have been charged directly from their accounts. This leads to more predictable behaviour, though, so it will only be a drawback for a minority of users.
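As an illustration, with JIT withdrawal gone, a program that pays for its own execution must first move the fee assets into holding explicitly and then buy execution from there. The following is Rust-like pseudocode; the builder shapes, asset amounts, and variable names are assumptions for illustration, not a prescribed API:

```rust
// Pseudocode sketch: explicit fee payment via the holding register.
// Without SetFeesMode/JIT withdrawal, the program withdraws enough assets
// into holding with WithdrawAsset, then pays fees from holding with
// BuyExecution -- the single entrypoint to fee payment.
let program = Xcm(vec![
    // Withdraw fee + transfer amounts from the origin's account into holding.
    WithdrawAsset((Here, fee_amount + transfer_amount).into()),
    // Pay for execution out of holding.
    BuyExecution { fees: (Here, fee_amount).into(), weight_limit: Unlimited },
    // Use the remainder of holding for the actual transfer.
    DepositAsset { assets: All.into(), beneficiary: dest_account },
]);
```

The key change for users is the first instruction: anything that fees could previously pull from the account just in time must now be withdrawn into holding up front.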
Implementations and benchmarking must change for most existing pallet calls that send XCMs to other locations.
Performance will be improved since unnecessary checks will be avoided.
JIT withdrawal was a way of side-stepping the regular flow of XCM programs. By removing it, the spec is simplified, but old use-cases now have to work with the originally intended behaviour, which may result in more implementation work.
Ergonomics for users will undoubtedly improve since the system is more predictable.
Existing programs in the ecosystem will break. The instruction should be deprecated as soon as this RFC is approved (but still fully supported), then removed in a subsequent XCM version (probably deprecated in v5, removed in v6).
The previous RFC PR on the xcm-format repo, before XCM RFCs were moved to fellowship RFCs: https://github.com/polkadot-fellows/xcm-format/pull/57.
None.
The new generic fees mechanism is related to this proposal and further motivates it, as the JIT withdraw-fees mechanism will become redundant anyway.
secp256r1_ecdsa_verify_prehashed Host Function to verify NIST-P256 elliptic curve signatures
Start Date | 16 August 2024 |
Description | Host function to verify NIST-P256 elliptic curve signatures. |
Authors | Rodrigo Quelhas |
This RFC proposes a new host function, secp256r1_ecdsa_verify_prehashed, for verifying NIST-P256 signatures. The function takes as input the message hash, the r and s components of the signature, and the x and y coordinates of the public key. By providing this function, runtime authors can leverage a more efficient verification mechanism for "secp256r1" elliptic curve signatures, reducing computational costs and improving overall performance.
The “secp256r1” elliptic curve is standardized by NIST and uses the same underlying calculations as the “secp256k1” curve, differing only in its input parameters. The cost of combined attacks and the security conditions are almost the same for both curves. Adding a host function would make signature verification using “secp256r1” available in the runtime, bringing multi-faceted benefits. One important factor is that this curve is widely used and supported in many modern devices, such as Apple’s Secure Enclave, Webauthn, and Android Keychain, which demonstrates broad user adoption. Additionally, this host function could enable valuable features in account abstraction, allowing more efficient and flexible management of accounts via transaction signing on mobile devices.
Most modern devices and applications rely on the “secp256r1” elliptic curve. The addition of this host function enables more efficient verification of device-native transaction signing mechanisms. For example:
This RFC proposes a new host function for runtime authors to leverage a more efficient verification mechanism for "secp256r1" elliptic curve signatures.
Proposed host function signature:
```rust
fn ext_secp256r1_ecdsa_verify_prehashed_version_1(
    sig: &[u8; 64],
    msg: &[u8; 32],
    pub_key: &[u8; 64],
) -> bool;
```
The host function MUST return true if the signature is valid or false otherwise.
N/A
The changes do not directly affect protocol security; parachains are not forced to use the host function.
N/A
The host function proposed in this RFC allows parachain runtime developers to use a more efficient verification mechanism for "secp256r1" elliptic curve signatures.
Parachain teams will need to include this host function to upgrade.