First Proposal: Mainnet Parameters #4300

Closed
bedeho opened this issue Sep 19, 2022 · 19 comments
bedeho commented Sep 19, 2022

The text was too long for a GitHub issue, so I made a Gist: https://gist.github.com/bedeho/1b231111596e25b215bc66f0bd0e7ccc

This proposal badly needs review for outright mistakes and omissions; keep discussion here, not in the Gist.

@bedeho bedeho added the mainnet label Sep 19, 2022

mochet commented Sep 19, 2022

My notes after an initial read. I don't know much about staking, so I didn't get into that deeply. I will re-review after a bit to let the information settle and will take a deeper look at some other things.

Question: For some/all of these parameters to change, is a runtime upgrade required?

One of the primary learnings from our past testnets is that there is a significant risk of free-riding and coordination failure within the council. This has been demonstrated across a broad range of values, from 20 down to 5.

  • This part isn't fully true; with KPIs we had a lot more activity. So depending upon the model that mainnet users decide to utilize, this could impact things a lot. It is also the case that we never had all WGs active at the same time during testnets (the membership + GW WGs were never used, and some earlier testnets did not have operations WGs, from what I can remember).

  • Council size: I think 3 is too low. It creates a higher risk of inactivity causing issues and makes it easier for an attacker to gain full control of the council (since they basically only need 2 seats). I would think 5 should be the minimum.

  • Council terms: I have mixed feelings about the 3-week duration, and my feeling is that it is a bit too long. I'd be more in favor of a 2-week term due to pacing; I feel anything longer may result in slower governance and less interest in the platform.

  • Curator + Forum + Membership WGs: In general I think these in particular are going to be quite unpredictable as we may end up having many lower cost actors in relation to activity. This may skew parameters as the leaders may command higher payments due to how much work is required for managing these WGs.

  • HR WG: I am not sure if this WG would make a lot of sense on mainnet so I am not certain it would be used for what it does currently (IMHO)

  • Marketing WG: There is a huge amount of work to be done in terms of documentation and other resources, so depending upon how efficient the WG is at doing this, it may be that this WG requires quite a bit more funding.

  • Infrastructure: Things like QNs and RPC nodes can apparently cost quite a lot of money based on figures I've seen in some places. Keep in mind that the DAO may have to pay for integrations to be made to connect with some services in the first place, plus ongoing usage costs (can attach some example proposals if needed). These tools will need to be funded and possibly run as public good services in order to improve developer/gateway adoption of the platform.

  • Validator count limit: Is there any reason we shouldn't have this as 300? Validator interest seems like something we could leverage for more interest/free advertising, depending on platform utilization.

  • Worker limits in WGs: Is 30 not a bit too low for certain WGs? My feeling is that with enough work we may well find easy adoption and support for the platform by hiring more people. I think certainly for the curator WG that 30 may really not be enough.


mochet commented Sep 19, 2022

Infrastructure cost examples:

We may want to run this kind of infrastructure for the benefit of potential gateway operators, but given our use case I expect the costs involved may be higher.

Note: for this kind of infrastructure it should be assumed that there may be initial costs involved with deployment/integration and they may be pivotal to ensuring initial platform growth.


Lezek123 commented Sep 26, 2022

Market Cap and Resource Pricing

The key purpose of fees is to deter possible abuse of the liveness of the system by hostile actors; there is also a secondary purpose of rationing access among the highest-value uses, but this can be done with a tip market, which needs no lower-bound pricing.

Such an attack would essentially boil down to exhausting the throughput of the system for some period of time. It is not feasible to really fully anticipate the external payoff that an attacker could derive from such an attack, hence one just has to make baseline assumptions which seem plausible based on the context at hand. For our chain I believe it is safe to assume that we only need to deter "vandalism", where primary payoff is just hedonic or reputational. To make things concrete, let us just assume that this type of attacker has willingness to pay of $10,000 USD/day to entirely block the transactional activity in the system.

Other things to consider:

Short selling

I think the assumption that the primary payoff for an attacker is hedonic or reputational is extremely optimistic.
As soon as there is any reasonably sized market for JOY, an attacker would be able to short-sell JOY on leverage for profit,
which I think would be the main motivation.

In order to try to model the potential profit from such an attack, I picked a few cryptocurrencies with a market cap close to $60,000,000 that also belong to the Polkadot ecosystem, and used Coingecko's market data, specifically the "-2% Depth" column, to estimate the amount of each token that can be sold without affecting the price by more than 2%.

| Cryptocurrency | Market cap | Volume (24h) | Sum of top 10 "-2% depth" values |
| --- | --- | --- | --- |
| Moonriver | $58,210,000 | $3,060,000 | $810,000 |
| Efinity | $60,454,000 | $1,708,000 | $180,000 |
| Bifrost | $66,472,000 | $412,000 | $107,000 |

As shown in the table, it takes on average a ~$365,000 trade to drop the price of a token by 2%.
I'm going to use $500,000 for simplicity and also to stay closer to the "safe side".

My assumption is therefore: given a current price of JOY p, I can sell ~$500,000 of JOY without dropping the price of JOY below 0.98 * p. If I short sell $500,000 of JOY, my average entry price will be ~0.99 * p in this case.

Now, to estimate how much a network congestion attack can affect the price of a cryptocurrency, I've chosen a few known incidents that caused significant network disruptions:

| Incident | Start time | End time | Start price | End price | Total price impact | Period | Avg. price impact / day |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Solana DDoS-related outage | 14 Sept 2021 12:00 UTC | 15 Sept 2021 05:00 UTC | $165.16 | $155.52 | -5.8% | 0.7 days | -8.3% |
| Helium blockchain halt | 15 Nov 2021 22:00 UTC | 17 Nov 2021 02:35 UTC | $48 | $43.5 | -9.4% | 1.2 days | -7.8% |
| Acala AUSD incident leading to partial functionality freeze | Aug 14 2022, ~00:00 UTC | Now: Sept 25 2022, 00:00 UTC (most of the functionality remains frozen) | $0.314 | $0.21 | -33.1% | 42 days | -0.79% |
| Ethereum denial-of-service attack | 22 Sept 2016 | 17 Oct 2016 | $13.23 | $11.97 | -9.5% | 26 days | -0.37% |

Although it's hard to draw any definitive conclusion from this data, it appears as though attacking the network over a period longer than 1 day is not optimal from the attacker's perspective, as the price impact per day is generally much lower for prolonged attacks.

It also seems like the price impact of making a blockchain unusable for about a day can be as high as ~8%.

With those assumptions we can calculate how much an attacker could potentially profit from a 1-day network congestion attack on a Joystream blockchain:

  1. Let's say p is the initial price of JOY before the attack.
  2. First the attacker short sells $500,000 worth of JOY, causing the price to drop to 0.98 * p. The average entry price is 0.99 * p.
  3. Then the attacker floods the network for one day, causing the price to drop by 8%, to 0.92 * p.
  4. Let's say the attacker now buys back the JOY at an average price of 0.93 * p (we assume the same market depth for buy and sell trades).
  5. The attacker gains at most ~$500,000 * 0.99 / 0.93 - $500,000 = ~$32,000 from the attack.
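A minimal arithmetic sketch of the scenario above (plain Rust; the position size and the entry/exit price factors are just the assumptions stated in this comment, not measured values):

```rust
// Rough reproduction of the short-selling profit estimate above (illustrative only).
fn main() {
    let position_usd: f64 = 500_000.0; // assumed size of the short position
    let avg_entry: f64 = 0.99;         // average entry price as a fraction of the pre-attack price p
    let avg_exit: f64 = 0.93;          // average buy-back price after the ~8% attack-driven drop

    // Upper-bound profit estimate, mirroring step 5 above:
    // ~$500,000 * 0.99 / 0.93 - $500,000 ≈ $32,000
    let gross_profit = position_usd * avg_entry / avg_exit - position_usd;
    println!("gross profit upper bound: ~${gross_profit:.0}");
}
```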

Of course this is almost a worst-case scenario: we're not taking into account the exchange fees and other costs associated with holding the short position, nor whether the attacker would be willing to take the risk (as the effect on the price is uncertain).

Given this analysis, however, I think it is reasonable to make the cost of such an attack at least $30,000 / day.

Archival / validator node storage costs

This is a very important factor and something we need to consider not only in the context of an attack, but
also of organic growth.

A day of full blocks (not counting operational extrinsics etc.) is: 3.75MB * 14400 = 54 GB

This is a problem not only because it increases storage requirements for validators, but also because it causes centralization and longer sync times.

However, if we focus just on storage, 1 GB of data on Amazon (I'm assuming EBS volume, since it's unclear whether S3 will have sufficient IOPS) will cost at least $0.08 / month, so each day of full blocks adds $4.32 / month of storage costs for each validator.

Assuming we have a maximum of 100 validators, that's $432 / month or $5,184 / year.
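A quick back-of-the-envelope check of the figures above (plain Rust; the 3.75 MB block size, $0.08/GB-month EBS price and 100 validators are the assumptions used in this comment):

```rust
fn main() {
    let full_block_mb: f64 = 3.75;      // assumed max block size
    let blocks_per_day: f64 = 14_400.0; // 6s block time
    let gb_per_day = full_block_mb * blocks_per_day / 1_000.0; // 54 GB of new chain data per day

    let ebs_price_gb_month: f64 = 0.08; // assumed EBS price per GB-month
    let validators: f64 = 100.0;        // assumed validator count

    let monthly_cost_per_validator = gb_per_day * ebs_price_gb_month;                // ~$4.32
    let yearly_cost_all_validators = monthly_cost_per_validator * validators * 12.0; // ~$5,184
    println!(
        "{gb_per_day} GB/day => ${monthly_cost_per_validator:.2}/month per validator, \
         ~${yearly_cost_all_validators:.0}/year across all validators"
    );
}
```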

The cost increase may be even greater for Hydra Indexers / Squid Archives, since they store all transactions and events in a less compact format and also index this data to optimize SQL queries (which requires additional storage), but for now I'm going to disregard this, as it is unclear how the people who run this kind of infrastructure will be incentivized.

For validators the incentivization proposal is clear - they earn a fixed amount of JOY a year (3%) + some percentage of transaction fees.

Transaction fees that get burned don't increase validators' yearly rewards directly;
they may only increase the value of their stake/holdings, and it's impossible to calculate the effect of this.

Therefore, if we want the network to be sustainable, I think it's inevitable that we give some percentage
of transaction fees to validators and adjust the parameters in such a way that 1 day of full blocks equals
at least $5,184 of total validators' earnings (which would make their operation sustainable for at least a year).

There are 2 parameters we can tweak to achieve that:

  • p - % of transaction fees going to validators
  • f - fee / byte

Having that in mind, I'll describe my proposal for both of those values later.

Polkadot/Kusama values

Tx fee / byte

Both Polkadot and Kusama use a TransactionByteFee of 10 milicents. Of course Polkadot's hardcoded price assumption (1 DOT = $1) is much more pessimistic than Kusama's (1 KSM = $300), so the real-world costs are currently very different, but I'm going to disregard the current market conditions.

Based on calculations from the previous sections, our fee should be at least $30,000 / 54 GB = ~0.056 milicents / byte.

This is a ~180x lower value than 10 milicents, which could be a sign that our estimate is too low.

Weight fee

Polkadot's/Kusama's weight fee is 1 cent / (10 * extrinsic_base_weight) per unit where extrinsic_base_weight = 85_212_000.
This means 1.174 nanocents / unit

Based on calculations from the previous sections, our fee should be at least $30,000 / (14400 * block_weight_limit_for_normal_dispatch) per unit, which is
$30,000 / (1.5 * 10^12 * 14400) = 0.139 nanocents / unit

This is a ~8.5x lower value than 1.174 nanocents, which could be a sign that it is slightly underestimated.
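Both lower bounds can be reproduced with a few lines (plain Rust; the $30,000/day deterrence budget, 3.75 MB blocks, the 1.5 * 10^12 normal-dispatch weight limit and 14,400 blocks/day are the assumptions used above):

```rust
fn main() {
    let attack_budget_cents: f64 = 30_000.0 * 100.0; // $30,000/day expressed in cents
    let blocks_per_day: f64 = 14_400.0;

    // Byte-fee floor: spread the budget over a day of full 3.75 MB blocks (~54 GB)
    let bytes_per_day = 3.75e6 * blocks_per_day;
    let byte_fee_milicents = attack_budget_cents / bytes_per_day * 1_000.0;
    println!("byte fee floor: ~{byte_fee_milicents:.3} milicents / byte"); // ~0.056

    // Weight-fee floor: spread the budget over a day of normal-dispatch weight
    let weight_per_day = 1.5e12 * blocks_per_day;
    let weight_fee_nanocents = attack_budget_cents / weight_per_day * 1e9;
    println!("weight fee floor: ~{weight_fee_nanocents:.3} nanocents / unit"); // ~0.139
}
```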

Margin

We need to take into account some margin, as our calculations were based on certain assumptions, like:

  • Market cap of JOY being $60,000,000
  • JOY market having a certain liquidity
  • Network congestion having a certain impact on JOY price
  • A certain cost of storage / GB for archival nodes
  • A certain number of archival nodes

For example, if the market cap of JOY were to drop to $20,000,000 while our other assumptions remained intact,
the attack described in the first subsection would suddenly become profitable.

As also shown in the previous subsection, the fees I estimated are still extremely low compared to the ones of Polkadot/Kusama.

Having that in mind, I suggest taking a safety margin by multiplying each fee by at least 3-5x to account for bad assumptions
(especially the one about market cap, since crypto markets tend to be very volatile).

Summary: My proposal

Taking all this into account I propose the following parameters:

  • Byte fee: 0.2 milicents / byte (this is still ~50x less than Polkadot/Kusama and ~4x more than the value calculated from $30,000 / day scenario)
  • Weight fee: 0.6 nanocents / unit (this is still ~2x less than Polkadot/Kusama and ~4x more than the value calculated from $30,000 / day scenario)
  • Percentage of fees going to validators: 20% (since the cost of running an archival node increases by ~$5,184 / year per day of full blocks, as calculated in one of the previous sections, giving validators 20% of the fees, and therefore ~$24,000 per day of full blocks with the currently proposed values, seems like more than enough to account for those costs)

Why is it ok for the byte fee to be 50x less than Polkadot's/Kusama's if weight fee is just 2x less?

I think it all comes down to how many validators we want to have, as described in the Archival / validator node storage costs section.

For example, Polkadot and Kusama want to maximize the number of validators. Kusama currently has 1000 validators, while in my calculations I used a maximum of 100.

Weight fee on the other hand shouldn't really depend that much on the number of validators, as the cost of running a validator doesn't increase proportionally to the weight of blocks being produced.

Proposal parameters

Here I'd like to specifically bring your attention to this setup:

        approval_quorum_percentage: TWO_OUT_OF_THREE,
        approval_threshold_percentage: HALF,

Because the approval quorum is greater than the approval threshold, there's no point in ever voting "Reject" on a proposal with those parameters, since voting "Abstain" has much more power than voting "Reject" in this case (because "Abstain" votes don't count toward the quorum).

Assuming a council has 3 members, let's consider this example:

  1. I'm the first voter and I vote "Reject"
  2. 2nd CM comes in and votes "Approve"
  3. The proposal passes, since quorum is reached (2 members voted) and approval threshold is also reached (50% of votes are "Approve" votes)

Now let's consider a different example:

  1. I'm the first voter and I vote "Abstain"
  2. 2nd CM comes in and votes "Approve"
  3. Approval quorum is not yet reached, so the 3rd member decides: if they vote anything other than "Approve", the proposal is rejected

This holds regardless of the number of council members: if the approval quorum is harder to reach than the threshold,
"Abstain" will always be a more powerful vote than "Reject". With 10 CMs it would take just 4 "Abstain" votes, but no fewer than 6 (!) "Reject" votes, to reject a proposal with those parameters.

My suggestion would be to therefore replace those parameters with:

        approval_quorum_percentage: TWO_OUT_OF_THREE,
        approval_threshold_percentage: TWO_OUT_OF_THREE,

This makes the dynamic much clearer: it basically requires at least 2 "Approve" votes. If 2 out of 3 CMs vote anything other than "Approve", the proposal will not pass.

Council & Elections & Proposals

  • There should be a very small council, specifically with 3 members.

I agree with @mochet that the number seems low. It makes the council a very centralized body, very dependent on a single actor.
I don't think I have a good-enough argument against this particular number, but if we were to go with it, I think the council periods should be shorter, because then bad actors can be replaced quickly and will have a more limited impact (for example, they wouldn't be able to stall key proposals for weeks). I expand on this thought further below.

(Council term) Total: 21 days = 3 weeks

I think there are multiple issues to be considered w.r.t. longer council terms:

  1. Council members currently have very limited accountability: they cannot be slashed (ever) or replaced (before their term ends). They also receive high rewards that cannot be customized on a per-member basis. The longer the council term, the less incentive there is for a council member to do anything at all, unless their own stake is so extremely high that they would actually feel the price impact that their idleness is causing.
  2. Proposals with higher constitutionality (3-4) will take forever to execute. With the current set of proposal parameters, a runtime upgrade (which in the worst case could be an important bug fix that cannot wait very long) will take at least ~2 months to ship. Considering this proposal also requires all CMs to vote "Approve", just one CM during those 4 council terms can delay an important upgrade by another ~2 months.

On the other hand, I don't think there is that much cost associated with making council terms shorter. It seems reasonable to assume that competent council members with a great long-term vision will just get re-elected over and over. With time they will also gain the advantage of a well-established reputation in the community, and it's very unlikely they'll be replaced unless they really mess something up.

Therefore my suggestion would be to:

  • Make constitutionality for runtime upgrades no higher than 3. This means at best it takes 1 council term + grace period to ship one, and ~3 council terms at worst (unless rejected in the meantime), as opposed to 2-4. This makes the best-case scenario almost 2x faster.
  • Lower the total council period to 7-14 days; I'd even opt for 7 at the beginning, at least until we have a better accountability mechanism.

Budgets & Spending and Inflation

Content, Membership, Forum, HR, Marketing

Content, Membership, Forum, HR, Marketing WGs: These groups have to start very modestly

I'm not sure I would put Content Working Group in the same bucket as the other ones mentioned here (membership, forum, HR, marketing).

I think the Content Working Group has an important role to play from the very beginning; it also has a lot of responsibility given its on-chain privileges. Things can easily go astray if we don't have enough responsive curators to watch over the platform and prevent it from being flooded with inappropriate content.

Realistically I think there should be at least 5-10 curators being paid $1,000 - $2,000 a month + a lead with a salary of at least ~$5,000, therefore I suggest a $15,000 / month budget for the content group.

Storage and distribution

Storage WG: Lets say there are 10K active channels, each one with 10 videos, each of 20min length at 1080p (H264) i.e. 10GB, in total is 10,000 * 10 * 10GB = 1PB, which with replication factor 5 is 5PB in total storage. With storage cost of $20K per month per 1PB, including basic bandwidth budget, this equals $100K

Distribution WG: Lets say there are 10,000 DAUs each consuming about 10mins of content per day, this is 10,000 x 5GB x 30 = 1,500,000 GB egress from distributors per month, and at price of $0.02/GB on AWS this works out to $30K

I think for the purpose of estimating the budget for those groups we need to:

  • establish some kind of timeline, as the infrastructure requirements will vary greatly over time, and so will the price of JOY. In my opinion it makes no sense to plan more than a year ahead, and the council can adjust the budget increments at any point through a proposal to deal with this.
  • have some picture of the relationship between the amount of content stored vs. distributed on an "average video platform"

For the purpose of my calculations I'll assume 10,000 DAUs each consuming about 10 mins of content per day (as put in the original proposal) is the average daily activity we strive to reach in the first year of existence.

Content uploaded vs content consumed

I had a suspicion that the proposed budget for the storage group may be overestimated, so to get a better perspective I collected some statistics about existing video platforms, as summarized in the table below:

| Video platform | Content uploaded/streamed daily (avg.) | Content watched daily (avg.) | Uploaded / watched ratio | Stats from | Stats source |
| --- | --- | --- | --- | --- | --- |
| YouTube | 576,000 h | 1,000,000,000 h | 1 / 1736 | February 2017 | https://en.wikipedia.org/wiki/YouTube |
| Rumble | 6,158 h | 5,833,333 h | 1 / 947 | August 2022 | https://finance.yahoo.com/news/rumble-sets-time-records-across-200000301.html |
| Vimeo | 350,000 videos | 23,833,000 videos | 1 / 68 | 2018 / 2021 | https://expandedramblings.com/index.php/vimeo-statistics/ (note: monthly views are outdated by ~3 years compared to videos uploaded) |
| Peertube | 1,130 videos | 56,400 videos | 1 / 50 | Jul 2022 - Sept 2022 | https://instances.joinpeertube.org/instances/stats |
| Twitch | 2,220,709 h | 62,838,064 h | 1 / 28 | August 2022 | https://twitchtracker.com/statistics/stream-time |

Based on those statistics, I think the best upload / distribution ratio to use for my calculations is 1/50.
I think ratios like those of YouTube or Rumble are only achievable for services that:

  • Have already existed for quite some time (i.e. they have already built a huge library of content)
  • Are way more consumer-focused than creator-focused

I think Twitch & Peertube kind of confirm that theory: Peertube is a relatively new invention, while Twitch is primarily focused on livestreaming, making the impact of accumulated content less pronounced.

This means that if we assume 1.5 PB being consumed monthly on average during the first year of Joystream's existence, we should assume about 1 / 50 * 1.5 PB = 30 TB of videos being uploaded monthly.

Calculating storage and distribution infrastructure costs

Normally I would expect some kind of exponential growth when it comes to the storage space required (due to increasing adoption), but even if we assume it happens linearly, i.e. there are actually 30 TB of content uploaded every month, then with the replication factor of 5 the storage nodes only need on average ~1 PB of capacity during the first year (in the first month they need 5 x 30 TB, in the 2nd month 5 x 60 TB and so on). That's 1 PB of average capacity for the storage system as a whole.

Here are a few things that I think were overlooked in the original proposal:

Some of those calculations depend on the total number of storage / distribution nodes and workers. In my calculations I'm assuming there are 30 storage nodes and 100 distributor nodes, and that the number of workers is the same as the number of nodes in each group.

  1. The transfer out of AWS (egress) is not $0.02 / GB, but actually $0.05 - $0.09 / GB (source 1, source 2). If all distributors together transfer 1.5 PB out of AWS per month, then each individual distributor transfers on average 15 TB / month, which means almost all costs end up in the $0.09 / GB tier. After giving it some thought, however, I think AWS may just not be the best choice for our distributors. If egress is our main concern, there are much cheaper alternatives, like DigitalOcean, OVHCloud or Linode, just to name a few. All those providers offer a substantial amount of transfer for free (~5 TB on average) and just $0.01 per GB of transfer above the monthly free limit. This means the transfer costs can be as low as $100 / month per node, giving us a total transfer cost of $10,000 / month (assuming 100 nodes).

  2. Distributors also need storage. If the total size of the Joystream content directory is ~200 TB on average during the first year (1 PB / 5x replication factor), the distributors probably need at least ~2 TB (1%) of cache storage to work efficiently. If we assume each distributor uses separate block storage (since high throughput and low latency are required), which costs ~$0.1 / GB-month (AWS, Linode, DigitalOcean), that gives us $200 per distributor per month, which is $20,000 / month assuming 100 distributors.

  3. Storage nodes also need bandwidth: in the context of CDNs we usually talk about a cache hit ratio of ~90-95%, which means a cache miss ratio of 5-10%. If we assume our distributors' cache is "good enough", we should probably get a ~10% miss ratio. Each cache miss is a request from a distributor node to a storage node. This implies storage nodes also need to worry about egress costs, as in the worst case scenario they will pay ~$0.09 / GB (as described in 1.), which is $13,500 / month. However, since we already ruled out AWS as a feasible choice for distributors, I'll do the same here and just assume that ~5 TB of outbound transfer a month is within the free tier of whichever cloud provider (if any) they end up choosing, meaning that during the first year the cost of transfer is $0 / month.

  4. Storage/distributor nodes need CPU and memory: each storage and distributor node needs to run on an instance with specs at least like those of an AWS t3.large instance. The cost of such instances is comparable between different cloud providers and usually ~$60 / month. If we assume there are 30 storage nodes and 100 distributor nodes, this gives us $1,800 / month for the storage WG and $6,000 / month for the distribution WG.

  5. There are supervision/administration costs involved. Ideally storage/distributor nodes would just be started and forgotten about, but in reality they will probably need some supervision, maintenance and manual work (like updates and fixes). Also, if storage/distributor workers earned just enough to cover the costs of their infrastructure, there would be no incentive for them to take the job. I think it's reasonable to assume that each worker will need to earn at least ~$1,000 / month and the lead will need to earn substantially more (like $5,000 - $10,000), as they would be responsible for managing all the workers, constantly monitoring the storage infrastructure etc. With our assumptions about the number of nodes/workers this gives us an additional $110,000 for the distribution working group and $40,000 for the storage working group.

Storage and distribution budget: My proposal

Based on my calculations I propose the following budget parameters for the storage and distribution working groups:

Storage WG

  • Average size of data stored during the first year: 200 TB (of unique data) * 5 (replication factor) = 1 PB
  • Cost of storing 1 PB / month: ~$20,000
  • Cost of 150 TB transfer / month (spread across 30 nodes): ~$0 (assuming it's within free tier)
  • Cost of instances / month (assuming 30 nodes): $1,800
  • Worker & lead payouts / month (assuming 30 workers): $40,000
  • Total budget / month: $61,800

Distributor WG

  • Cost of transferring 1.5 PB / month (assuming 100 nodes): $10,000
  • Cost of high throughput cache storage / month (100 x 2TB): $20,000
  • Cost of instances / month (assuming 100 nodes): $6,000
  • Worker & lead payouts / month (assuming 100 workers): $110,000
  • Total budget / month: $146,000

The key aspect of my proposal is that the budget for the distribution working group is actually significantly higher than the one for the storage working group, which I think is reasonable considering it's assumed to have many more workers.
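For reference, a small sketch that recomputes both monthly budgets from the assumptions listed above (plain Rust; node counts, unit prices and salaries are the assumptions from this comment, not market quotes):

```rust
fn main() {
    // Storage WG: ~$20,000/PB-month storage, egress within free tiers, ~$60/month instances,
    // 30 workers at ~$1,000/month + a lead at ~$10,000/month.
    let storage_nodes = 30.0_f64;
    let storage_budget = 20_000.0
        + 0.0
        + storage_nodes * 60.0
        + (storage_nodes * 1_000.0 + 10_000.0);
    println!("Storage WG: ~${storage_budget:.0}/month"); // ~$61,800

    // Distribution WG: ~$100/month egress per node, 2 TB cache per node at ~$0.1/GB-month,
    // ~$60/month instances, 100 workers at ~$1,000/month + a lead at ~$10,000/month.
    let dist_nodes = 100.0_f64;
    let dist_budget = dist_nodes * 100.0
        + dist_nodes * 2_000.0 * 0.1
        + dist_nodes * 60.0
        + (dist_nodes * 1_000.0 + 10_000.0);
    println!("Distribution WG: ~${dist_budget:.0}/month"); // ~$146,000
}
```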

Creators & creator payouts

Creators payouts: Lets say there are 1K creators involved, each receiving $500 on average/month, this nets out to $500K (omg!).

I think in general it is a good strategy to estimate budgets for different groups based on the set of assumptions that were already made w.r.t. other groups, in this case the storage and distribution groups specifically.

For example, we assumed that on average 1.5 PB of content is consumed each month during the first year. If we take an average video length
of ~10 min (for example, YouTube has ~5 bln views per day and ~1 bln hours of content watched per day, which gives ~12 min per view), which is ~200 MB (full HD), this gives us ~7,500,000 views / month.

YouTube creators earn ~$5 per 1000 views (source), which
means if we want to be competitive we should probably offer up to ~$10 / 1000 views.

With those assumptions, realistically, creator payouts could be as low as ~$75,000 / month.

Total monthly budget & inflation

My proposal

Using my proposed estimation of budget spendings, we get:

  • Council: $30,000 (unchanged)
  • Spending proposals: $5,000 (unchanged)
  • Content: $15,000 (+ $13,000)
  • Membership / Forum / HR / Marketing: $8,000 in total (unchanged)
  • Gateway: $30,000 (unchanged)
  • Builders: $50,000 (unchanged)
  • Storage: $62,000 (- $48,000)
  • Distribution: $146,000 (+ $106,000)
  • Creators: $75,000 (- $425,000)
  • Stakers: $150,000 (unchanged)
  • Total / month: ~$570,000
  • Annual inflation rate: ~11.4%

How sustainable is this?

Ideally we would need to burn more or less the same amount of JOY that we mint or just slightly less in order for the price of JOY to be sustainable over time.

We've made some assumptions about content consumption and storage, one of which was that there is about ~1 TB of videos uploaded daily.
We've also made an assumption before that 1 video view equals ~200 MB of content being consumed.
Similarly, we can assume the average video size is also ~200 MB (for example, the average video length on YouTube is ~12 minutes).

This means there are ~5,000 videos uploaded daily.
Each video upload is a transaction with an inclusion fee of ~1.3 cents
(I'm assuming the byte fee and weight fee I proposed in the Market Cap and Resource Pricing section; I'm also using weights based on https://github.com/Joystream/joystream/tree/carthage-weights-gen, 10 storage buckets / channel, and some standard metadata length).

Out of these 1.3 cents, 80% is burned, which gives us ~1 cent burned per video.

With 5k videos uploaded daily that's just $50 of JOY burned per day for video upload inclusion fees.
That's almost nothing compared to how much we actually need to burn per day (~$19,000).

Another source of burning is membership fees.
According to some YouTube statistics, given ~800 mln videos, there are ~33 mln active channels and ~122 mln daily active users. This gives us a rough estimate of the number of members being ~6-20x lower than the number of videos. For simplicity I'll assume it's 10x lower; in that case we would have ~500 new members daily. With a reasonable membership fee of ~$1 this is still only $500 / day.

Other sources of burning include:

  • Other transactions inclusion fees
  • Data size fees
  • CRT sale platform fees
  • NFT sale platform fees

For the last two it's very hard to estimate the number of tokens burned, but with a 2% platform fee we would need a volume of ~$950,000 / day to burn $19,000 of JOY / day, which to me seems very unlikely to achieve with our assumptions about usage within the first year. As for inclusion fees of other transactions: even if we were to assume that video creation transaction burns constitute only 1% of all transaction burns (which seems very optimistic), we would still get just $5,000 / day, which is only ~25% of what we need.

That leaves us with the data size fee as our last hope to counter high inflation. Ideally it should cover at least ~$10,000 of burns per day.

How much should the data size fee be?

My answer would be: as high as we can tolerate. It's not hard to calculate that we would need an average (200 MB) video upload to cost ~$2 to fully cancel the inflation. This may seem high, but note that even if each person uploading a video were to just pay the cost of storing it for one year with the replication factor of 5, it would cost them ~25 cents to upload a 200 MB video, so I believe we cannot make it very cheap regardless. I think we should make the cost of uploading a 200 MB video somewhere between $0.25 and $2.

My suggestion: make the data size fee $0.003 / MB. This is ~$0.60 for a 200 MB video, which doesn't sound that terrible. It also means that given 1 TB of content uploaded daily, ~$3,000 of JOY is burned each day on data size fees. Although this is just ~15% of what we would need to burn to cancel inflation, it would at least be a start before we get the gateway payments rolling.

State bloat bond pricing

The cost of storing AccountData

// The price applied to
// We assume that if balances map is protected purely by the `ExistentialDeposit`, then
// this would be sufficient for any other storage map as well. In fact, the `balances`
// pallet this is also covering the storage cost of `AccountData` which currently has form
//
// pub struct AccountData<Balance> {
//  pub free: Balance,
//  pub reserved: Balance,
//  pub misc_frozen: Balance,
//  pub fee_frozen: Balance,
// }
//
// This has `sizeof(AccountData<u128>)` = `4*sizeof(u128)` bytes = `512` bytes, hence
// and implied upper bound for the variable cost would be `ExistentialDeposit/512`,
// so we use that
pub const FixedBloatBondPrice: Balance = ExistentialDeposit::get();
pub const PerByteBloatBondPrice: Balance = ExistentialDeposit::get().saturating_div(512);

The calculation is a bit off here:

  1. u128 is 16 bytes (128 bits = 16 bytes)
  2. sizeof(AccountData<u128>) is therefore 64 bytes
  3. We also need to factor in the size of the key in the frame_system::Account map, which is Twox128(Prefix::pallet_prefix()) ++ Twox128(Prefix::STORAGE_PREFIX) ++ Hasher1(encode(key)). Each Twox128 hash is 16 bytes, so that is 16 + 16 + Blake2_128Concat(encode(account_id)) bytes, i.e. 16 + 16 + 48 = 80 bytes. 64 + 80 gives 144 bytes in total (I'll explain the map entry size calculation in more detail later in this review; a quick size check also follows this list).
  4. Most importantly: our current value for ExistentialDeposit was chosen completely arbitrarily; we cannot rely on it as the ultimate source of truth about what the cost of bloating the runtime state should be.
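A quick size check of points 1-3 (standalone Rust with a mirror of pallet_balances' AccountData, for illustration only; the real type lives in the balances pallet):

```rust
use core::mem::size_of;

// Mirror of pallet_balances::AccountData<Balance> (four Balance fields), for size checking only.
#[allow(dead_code)]
struct AccountData<Balance> {
    free: Balance,
    reserved: Balance,
    misc_frozen: Balance,
    fee_frozen: Balance,
}

fn main() {
    // u128 is 16 bytes, so the value part is 4 * 16 = 64 bytes (not 512).
    assert_eq!(size_of::<AccountData<u128>>(), 64);

    // Key in frame_system::Account:
    // Twox128(pallet_prefix) ++ Twox128(STORAGE_PREFIX) ++ Blake2_128Concat(AccountId32)
    let key_len = 16 + 16 + (16 + 32);
    assert_eq!(key_len, 80);

    // Total state footprint per account entry.
    assert_eq!(size_of::<AccountData<u128>>() + key_len, 144);
}
```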

Reasonable value for ExistentialDeposit (and therefore state bloat bonds)

So we know that ExistentialDeposit is the cost of storing 144 bytes in the state.

Polkadot's ExistentialDeposit is $1 (under their assumption of 1 DOT = $1)
Kusama's ExistentialDeposit is $0.01 (under their assumption of 1 KSM = $300)

Based on that and the supply of DOT and KSM (~1bln and ~10mln respectively), we can calculate the max. possible
state size (that was assumed when those values were hardcoded):

For Polkadot it's: 1,000,000,000 * 144 bytes = 144 GB
For Kusama it's: 30,000 * 10,000,000 * 144 bytes = ~43.2 TB (the $0.01 ED is 1/30,000 KSM at the assumed $300 / KSM, so the supply covers 30,000 * 10,000,000 deposits)

They seem very inconsistent, but considering Kusama was never meant to be a "safe" chain, I believe a safe value is closer to Polkadot's 144 GB. This is also the size of Ethereum's state about a year ago, when Vitalik called optimizing the state the next major challenge for Ethereum, so the value seems about right.

If we wanted the same limit (144 GB) for Joystream, we would end up with a value like:

1,000,000,000 JOY = 144,000,000,000 bytes
1 JOY = 144 bytes
1 byte = ~0.007 JOY = ~42 milicents (under the assumption of 1 JOY = $0.06)

Now the size_of::<Channel<T>> for example is 320 bytes and the size of encoded ChannelById map key is 56 bytes (32 bytes prefix + Blake2_128Concat(channel_id)) which means channel bloat bond under those assumptions will be:

376 * 42 milicents = ~15.8 cents

This doesn't seem terrible, however I think there's some room here to make the fee even lower, as the 144 GB value that we chose was very restrictive.

I think it would still be safe if we were to make the fee 2-3 times lower.

My suggestion (a short derivation sketch follows the list):

  • Existential deposit: 2 cents / 0.33 JOY
  • State bloat bond per byte (based on ED): ~13.9 milicents
  • Channel bloat bond (based on ED): ~5.2 cents
  • Video bloat bond (based on ED): ~4.3 cents
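The numbers above can be derived like this (plain Rust; the 2-cent ED, the 144-byte account footprint and the 376-byte channel entry come from the preceding analysis):

```rust
fn main() {
    let ed_cents: f64 = 2.0;              // suggested ExistentialDeposit
    let account_entry_bytes: f64 = 144.0; // AccountData value + frame_system::Account key

    let per_byte_milicents = ed_cents / account_entry_bytes * 1_000.0;
    println!("per-byte bloat bond: ~{per_byte_milicents:.1} milicents"); // ~13.9

    let channel_entry_bytes: f64 = 320.0 + 56.0; // size_of::<Channel<T>> + ChannelById key
    let channel_bond_cents = channel_entry_bytes * per_byte_milicents / 1_000.0;
    println!("channel bloat bond: ~{channel_bond_cents:.1} cents"); // ~5.2
}
```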

Calculating max cleanup inclusion fee

Unfortunately, this means we need to have estimates of the inclusion fees of such operations built into the configuration of the chain itself, which is hassle to remember to maintain. Ideally, these could be derived by just calling TransactionPayment::query_info in the configuration, but the problem is that this is not a const computation (e.g. it depends on fee multiplier in storage).

So first of all, it doesn't have to be constant in order for us to be able to use its result as a config parameter; for example, we can do:

pub fn compute_fee(call: Call) -> Balance {
    let xt = UncheckedExtrinsic::new_unsigned(call);
    TransactionPayment::compute_fee(
        xt.encode().len() as u32 + 101, // I'm adding 101 bytes, as this is the length of a typical signature
        &<UncheckedExtrinsic as GetDispatchInfo>::get_dispatch_info(&xt),
        0,
    )
}

parameter_types! {
    pub PostDeposit: Balance = compute_fee(
        Call::Forum(forum::Call::<Runtime>::delete_posts {
            forum_user_id: <Runtime as MembershipTypes>::MemberId::zero(),
            posts: BTreeMap::from_iter(vec![(
                forum::ExtendedPostId::<Runtime> { category_id: 0, thread_id: 0, post_id: 0 },
                true
            )]),
            rationale: Vec::new()
        })
    );
}

The fee_multiplier starts at 1 and then is adjusted based on how congested the network is.
I think just using the default value of 1 (which assumes we operate at blocks being ~25% full) is reasonable.

The only problem here is that I'm using an unsigned extrinsic to estimate the fee, which is smaller than a signed extrinsic, so I need to manually add the length of a signature (xt.encode().len() as u32 + 101). This is just a simplification, because constructing a signed extrinsic is much more involved.

To confirm this works as expected, I used the CLI fee-profile command (screenshot omitted):

As you can see, it works exactly as expected.

pallet_multisig and calculating storage entry size

// So for adding to `Multisigs` we are paying for
// `sizeof((AccountIdx[u8; 32] -> fixed_part[Multisig]))`
//  = `sizeof(AccountId)` + `sizeof([u8; 32])` + `sizeof(fixed_part[Multisig])`
//  = `sizeof(AccountId)` + `sizeof([u8; 32])` + `sizeof(BlockNumber || Balance || AccountId)`
//  = `sizeof(AccountId)` + `sizeof([u8; 32])` + `sizeof(BlockNumber)` + `sizeof(Balance)` + `sizeof(AccountId)`
//  = `32` + `32` + `32` + `128` + `32`
//  = `256`
//

This is incorrect. To actually calculate the cost of adding to Multisigs we need to look at the storage map type:

	#[pallet::storage]
	pub type Multisigs<T: Config> = StorageDoubleMap<
		_,
		Twox64Concat,
		T::AccountId,
		Blake2_128Concat,
		[u8; 32],
		Multisig<T::BlockNumber, BalanceOf<T>, T::AccountId>,
	>;

This implementation tells us that:

  • First key is AccountId and is encoded using Twox64Concat.
    Twox64Concat encoding of AccountId produces an output of size: size_of::<AccountId>() + 8 bytes (40 bytes)
  • Second key is [u8; 32] and is encoded using Blake2_128Concat.
    Blake2_128Concat encoding of [u8; 32] produces an output of size: size_of::<[u8; 32]>() + 16 bytes (48 bytes)

Now the final key of a double map looks like this:

/// Each value is stored at:
/// ```nocompile
/// Twox128(Prefix::pallet_prefix())
/// 		++ Twox128(Prefix::STORAGE_PREFIX)
/// 		++ Hasher1(encode(key1))
/// 		++ Hasher2(encode(key2))
/// ```

Which means we need to add the size of Twox128(Prefix::pallet_prefix()) and Twox128(Prefix::STORAGE_PREFIX)
to the values we just calculated, which gives us: 16 + 16 + 40 + 48 = 120 bytes.

If we look at an example key from Multisigs map using https://polkadot.js.org/apps/#/chainstate:

0x7474449cca95dc5d0c00e71735a6d17d3cd15a3fd6e04e47bee3922dbfa92c8d3e73123ebcdee9161cbd2d43530a44705ad088af313e18f80b53ef16b36177cd4b77b846f2a5f07cff0f22492f44bac4c4b30ae58d0e8daa0000000000000000000000000000000000000000000000000000000000000000

We can see that it's indeed 120 bytes (240 hex characters)

Now if we look at the type definition of Multisig, it's:

pub struct Multisig<BlockNumber, Balance, AccountId> {
	/// The extrinsic when the multisig operation was opened.
	when: Timepoint<BlockNumber>,
	/// The amount held in reserve of the `depositor`, to be returned once the operation ends.
	deposit: Balance,
	/// The account who opened it (i.e. the first to approve it).
	depositor: AccountId,
	/// The approvals achieved so far, including the depositor. Always sorted.
	approvals: Vec<AccountId>,
}

pub struct Timepoint<BlockNumber> {
	/// The height of the chain at the point in time.
	height: BlockNumber,
	/// The index of the extrinsic at the point in time.
	index: u32,
}

So the size_of::<Multisig> is:

  • size_of::<Timepoint<BlockNumber>> which is size_of::<BlockNumber> + size_of::<u32> so 4 + 4 = 8 bytes (remember u32 is just 4 bytes, since 32 refers to the number of bits) +
  • size_of::<Balance> which is 128 / 8 = 16 bytes +
  • size_of::<AccountId> which is 32 bytes

Which totals to 8 + 16 + 32 = 56 bytes.

Together the size of the Multisigs double map key + the size of value is therefore 120 + 56 = 176 bytes

For all other maps we need to follow a similar pattern when calculating the size of an entry.
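A small helper following that pattern, useful for sanity-checking other entries (illustrative; the only inputs are the encoded key sizes, the hasher overheads of +8 bytes for Twox64Concat and +16 bytes for Blake2_128Concat, and the value size):

```rust
/// Estimated state footprint of a StorageDoubleMap entry:
/// Twox128(pallet_prefix) ++ Twox128(STORAGE_PREFIX) ++ Hasher1(key1) ++ Hasher2(key2) ++ value
fn double_map_entry_size(key1_encoded: usize, key2_encoded: usize, value_size: usize) -> usize {
    let prefix = 16 + 16;                // two Twox128 hashes
    let hashed_key1 = key1_encoded + 8;  // Twox64Concat adds an 8-byte hash alongside the encoded key
    let hashed_key2 = key2_encoded + 16; // Blake2_128Concat adds a 16-byte hash alongside the encoded key
    prefix + hashed_key1 + hashed_key2 + value_size
}

fn main() {
    // pallet_multisig::Multisigs example from above:
    // AccountId (32 bytes) under Twox64Concat, [u8; 32] under Blake2_128Concat, 56-byte Multisig value.
    assert_eq!(double_map_entry_size(32, 32, 56), 176);
}
```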

This is a common mistake in this proposal, so I won't point it out in all the other places where it occurs, unless it results in some broken assumptions.

Staking:

        /// the minimal amount to be rewarded between validators, expressed as a fraction
        /// of total issuance. Known as `I_0` in the literature. Expressed in millionth, must be between 0
        /// and 1_000_000.
        min_inflation: percentage_to_millionth(3),

        /// the maximum amount to be rewarded between validators, expressed as a fraction
        /// of total issuance. This is attained only when `ideal_stake` is achieved. Expressed in
        /// millionth, must be between min_inflation and 1_000_000.
        max_inflation: percentage_to_millionth(3),

With min_inflation exactly the same as max_inflation don't we effectively end up with a flat line?
It seems like other parameters (like ideal_stake) have no meaning in this case.

council

pub const MinCandidateStake: Balance = dollars!(10_000);

As I already mentioned, I'm not a fan of 21-day council periods, but this is another reason.
If a bad actor ends up in the council with just the minimum stake, then with the proposed parameters of a 21-day council term and a $10,000 monthly reward they almost double their stake during the term, no matter what they do or don't do during said term.

  • next_budget_refill
    • Type: T::BlockNumber
    • Description: The next block in which the budget will be increased.
    • Values: 0

It's not a good idea to set this value to 0.
It's used during on_initialize, where it's compared against the current block number, which is never 0:

    // Checkout elected council members reward payments.
    fn try_process_budget(now: T::BlockNumber) {
        // budget autorefill
        if now == NextBudgetRefill::<T>::get() {
            Self::refill_budget(now);
        }

        // council members rewards
        if now == NextRewardPayments::<T>::get() {
            Self::pay_elected_member_rewards(now);
        }
    }

Although we can get away with next_reward_payments being 0, because it's also modified when a new council is elected:

    fn elect_new_council(elected_members: &[CouncilMemberOf<T>], now: T::BlockNumber) {
        // ...
        // try to pay any unpaid rewards (any unpaid rewards after this will be discarded call)
        Module::<T>::pay_elected_member_rewards(now);

If we set next_budget_refill to 0, the budget will never refill.

Suggestion: Change the value of next_budget_refill in genesis config to 1 or leave it as it is now (<Runtime as CouncilTrait>::BudgetRefillPeriod::get())

members

    pub const DefaultMembershipPrice: Balance = cents!(5);
    pub const ReferralCutMaximumPercent: u8 = 50;
    pub const DefaultInitialInvitationBalance: Balance = dollars!(5);

Notice that membership pallet has this constant hardcoded outside of the Config:

pub(crate) const DEFAULT_MEMBER_INVITES_COUNT: u32 = 5;

This means that by buying membership for 5 cents we get 5 invites.

If each of these invites can produce a new membership with $5 of invitation-locked funds, this means we can very cheaply acquire a lot of locked JOY, which can then be used to spam the blockchain (as invitation lock can pay for tx fees), or for voting.

For example, if I buy 1000 memberships for just $50, I'll get 5000 invites and can effectively get $25,000 of invitation-locked JOY. I can then use this locked JOY to spam the blockchain with full blocks for almost an entire day(!)

Therefore I think it only makes sense for DefaultInitialInvitationBalance to be no more than DefaultMembershipPrice / DEFAULT_MEMBER_INVITES_COUNT.
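A tiny sketch of the abuse scenario and the suggested constraint (plain Rust; the values are the current defaults quoted above):

```rust
fn main() {
    let membership_price: f64 = 0.05;  // DefaultMembershipPrice: 5 cents
    let invites_per_member: f64 = 5.0; // DEFAULT_MEMBER_INVITES_COUNT
    let invitation_balance: f64 = 5.0; // DefaultInitialInvitationBalance: $5

    let memberships_bought: f64 = 1_000.0;
    let cost = memberships_bought * membership_price;                              // $50
    let locked = memberships_bought * invites_per_member * invitation_balance;     // $25,000
    println!("spend ${cost:.0}, obtain ${locked:.0} of invitation-locked JOY");

    // Suggested constraint:
    // DefaultInitialInvitationBalance <= DefaultMembershipPrice / DEFAULT_MEMBER_INVITES_COUNT
    let constraint_holds = invitation_balance <= membership_price / invites_per_member;
    println!("constraint holds with current defaults: {constraint_holds}"); // false
}
```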

My suggestion:

  • DefaultInitialInvitationBalance: 50 cents
  • DefaultMembershipPrice: $1
  • DEFAULT_MEMBER_INVITES_COUNT: 2

Individual members can receive more invites provided they are not abusing this feature.

working_group

    pub const MaxWorkerNumberLimit: u32 = 30;

As already pointed out by @mochet, 30 seems very low.
I think this especially applies to DistributionWorkingGroup, unless we want each member to run at least 3-4 nodes?

// Setup all reward payouts to be happening in a cascade of 10 blocks apart.
 pub const ForumWorkingGroupRewardPeriod: u32 = days!(1) + 10;
 pub const StorageWorkingGroupRewardPeriod: u32 = ForumWorkingGroupRewardPeriod + 10;
 pub const ContentWorkingGroupRewardPeriod: u32 = StorageWorkingGroupRewardPeriod + 10;
 pub const MembershipRewardPeriod: u32 = ContentWorkingGroupRewardPeriod + 10;
 pub const GatewayRewardPeriod: u32 = MembershipRewardPeriod + 10;
 pub const OperationsAlphaRewardPeriod: u32 = GatewayRewardPeriod + 10;
 pub const OperationsBetaRewardPeriod: u32 = OperationsAlphaRewardPeriod + 10;
 pub const OperationsGammaRewardPeriod: u32 = OperationsBetaRewardPeriod + 10;
 pub const DistributionRewardPeriod: u32 = OperationsGammaRewardPeriod + 10;

The effect of this will be cumulative, so the rewards will no longer happen in a "cascade of 10 blocks apart" right after the first reward.

Let's say for the Forum working group the first batch of rewards is paid at block x, and for Storage at x + 10.
Then for Forum the second batch will be paid at block 2x, for Storage at 2*(x+10) = 2x + 20, for Content at 2*(x+20) = 2x + 40, and so on...
So the second time the rewards are paid, individual groups will already be 20 blocks apart, and the n-th time they are paid they will be 10 * n blocks apart (at some point some groups will be days behind...).
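A small illustration of the drift (plain Rust; this assumes each group's rewards are simply paid every `period` blocks, with `days!(1)` = 14,400 blocks at a 6s block time):

```rust
fn main() {
    let day: u32 = 14_400;              // days!(1) at a 6s block time
    let forum_period = day + 10;        // ForumWorkingGroupRewardPeriod
    let storage_period = forum_period + 10;
    let content_period = storage_period + 10;

    for n in 1..=5u32 {
        // n-th payout block of each group
        let forum = n * forum_period;
        let storage = n * storage_period;
        let content = n * content_period;
        println!(
            "payout #{n}: storage lags forum by {} blocks, content by {} blocks",
            storage - forum,
            content - forum
        );
    }
    // The gaps grow by 10 (resp. 20) blocks with every payout instead of staying fixed.
}
```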

bounty

 pub const MinWorkEntrantStake: Balance = compute_single_bloat_bond_with_cleanup(EntryMapItemByteSize, MaxWorkEntrantCleanupFee);

This will possibly be much lower than CandidateStake, which is described as the "lower bound for any stake binding", although I'm not sure if that by itself is a problem.

storage

pub DataObjectPerMegabyteFee get (fn data_object_per_mega_byte_fee) config(): BalanceOf<T> = (100*BYTES_IN_GB).saturating_div(cents!(1));

I don't understand this formula.
It basically says the fee per megabyte is (100 * 10^9) / (HAPI per $0.01), which is 10^11 / 1,666,666,666 = 60 HAPI.
So it's just ~$0.0000000004 per MB, which is obviously too low (that's ~$0.0004 per TB of uploaded data!).

My suggestion is to make the data size fee $0.003 / MB; I provided a rationale for this in the "How much should the data size fee be?" subsection above.
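To translate the suggested $0.003 / MB into on-chain units, a quick sketch (plain Rust; it assumes the same 1 cent ≈ 1,666,666,666 HAPI conversion used in the calculation above):

```rust
fn main() {
    // The calculation above uses cents!(1) = 1,666,666,666 HAPI (i.e. 1 JOY = $0.06).
    let hapi_per_cent: u128 = 1_666_666_666;

    // Suggested data size fee: $0.003 / MB = 0.3 cents / MB
    let fee_per_mb_hapi = hapi_per_cent * 3 / 10;
    println!("DataObjectPerMegabyteFee ≈ {fee_per_mb_hapi} HAPI per MB (~0.3 cents)");

    // So a 200 MB video pays ~60 cents in data size fees, and ~1 TB/day of uploads
    // burns ~$3,000/day, matching the numbers in the earlier subsection.
}
```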

    pub const StorageBucketsPerBagValueConstraint: storage::StorageBucketsPerBagValueConstraint =
       storage::StorageBucketsPerBagValueConstraint {min: 3, max_min_diff: 10};

I recently changed these semantics to MinStorageBucketsPerBag and MaxStorageBucketsPerBag, as it's clearer and also makes it easier to define bounded collections based on those values.

So having that in mind, do we want MaxStorageBucketsPerBag to be 13 or 10?

proposals_engine

// Its difficult to select one uniform value here, and since this fee actually
// is collected regardless of how much was actually staked, it should be kept low for now.
pub const ProposalCancellationFee: Balance = dollars!(1);

I think the only real risk of this value being too low is that, since there can only be a limited number of active proposals at any given time, someone
could quite cheaply create new proposals and keep cancelling them before they get slashed, possibly blocking some important proposal from being submitted.

Initially I thought 1 USD might be too low to discourage this kind of attack, but on second thought it's probably just enough, given how easily it can be mitigated by a responsive council.

proposals_codex

// We are keeping this purposefully small for now becuase we only have one funding request
// proposal, and if it was to allow big spends, then we would need different parameter settings.
// This can be introduced later
pub const FundingRequestProposalMaxAmount: Balance = dollars!(10_000)
pub const FundingRequestProposalMaxAccounts: u32 = 20;

Please note that FundingRequestProposalMaxAmount is actually per account, so with FundingRequestProposalMaxAccounts we get $200k per proposal.


bedeho commented Sep 27, 2022

Replying to the first part now, just to unblock you and others. Will continue with the rest tomorrow afternoon or on Thursday.

I think the assumption that the primary payoff for an attacker is hedonic or reputational is extremely optimistic.
As soon as there is any reasonably sized market for JOY, an attacker would be able to short-sell JOY on leverage for profit,
which I think would be the main motivation.

  • Great analysis, and I really liked the creative way of finding empirical estimates for the price impact of attacks.
  • I agree short selling is a conceptual issue; it is recognized in the handbook as effectively undercutting the security guarantees provided by staking in general.
  • I think there are some reasons to suspect such an attack would not be particularly attractive, even if all these numbers are right:
    • You need a shorting venue, let alone a trading venue, and $JOY has neither. For reference, only a minority of exchanges offer shorting; Kraken is one, but it only allows 46 of its 206 assets to be shorted (https://www.kraken.com/features/margin-trading), and this still comes with extensive restrictions on who can use the functionality, and undoubtedly lots of supervision and discretion over the operation of the trading markets.
    • Most of the listed incidents are full-on halts or major hacks which diverted funds, not simple congestion of most extrinsics. Market impact is likely to be lower, I would presume.
    • The return is not that great: you are risking $500K to earn $32K - ($10K + fees) = $22K, i.e. a 4.4% return if there are 0 fees. This seems like a very bad bet on a risk-adjusted basis under most reasonable risk assumptions associated with shorting cryptos. This is probably the missing piece which fully accounts for why no shorting-based attack has been pulled off, at least as far as is publicly known, not even at higher-level governance layers, which often have critical dials in many systems.
    • As a reference, it would be interesting to check the cost of 24-hour congestion on other chains with a profile similar to what we are going for, but with much larger caps, and thus more eyes on them. I'm thinking of chains which target low-value transactional uses of block space. It would be interesting to see how expensive it would be to fill those blocks for 24 hours, say on BCH, DeSo, BSC and Flow. Remember that their block times are very different from ours!
    • Even if this attack was fully executed once, how many times could it be done? Perhaps only once, perhaps a few times? It may still be that the benefit of low fees is, on net, more important than avoiding such incidents. Worth keeping in mind at least.

In the end I would prefer not to go from $10K to $30K per day, as low fees are quite important for us, and the short-term (say, next 1-2 years) risk of this issue seems low, but I'm not wedded to it!

Therefore, if we want the network to be sustainable, I think it's inevitable that we give some percentage of transaction fees to validators and adjust the parameters in such a way that 1 day of full blocks equals at least $5,184 of total validators' earnings (which would make their operation sustainable for at least a year).

  • First off, I think it's great to highlight just how much space this is per day, oh my!
  • Second, I'm not sure I understood why normal validator payments would not be enough to cover this $5,184 / year expense? At 3%, with the stated cap, this is $1.8M, out of which all validators will set commissions to at least cover their costs? (I don't think there is some sort of prisoner's dilemma in this pricing.)
  • Third, I think what is missing from this analysis is that, if blocks are indeed this full all year long, then the system is getting very heavy utilization. This means that the value of the system overall is likely much higher, which in turn means simply paying validators with inflation will be much more feasible. This does highlight that we really need to make sure we get the most value out of that block space, meaning that things like compression of metadata, and offloading certain parts of what is currently on chain to off-chain Ceramic-like systems, may be advisable.
  • Fourth, and most importantly: the worst possible outcome is to require individual actors to pay, up front, the costs they cause in the system, when they create large positive spillovers internally in the system, as creators and users do. Since the network captures a lot of this value, the network should fund a lot of this activity, and the only role for fees is to deter attacks, which is an orthogonal concern to the financial viability of the validators. This is why, as an example, YouTube never asked creators to pay at any point through its highly unprofitable growth journey, despite very substantial costs.

Margin We need to take into account some margin, as our calculations were based on certain assumptions, like:

I agree the market cap one is by far the riskiest assumption here, both ways. There is no way around this other than actually updating the fee with a runtime upgrade proposal when one sees the market cap stabilize, over some sustained period of time, at a new order of magnitude very different from what was assumed.

Summary: My proposal

Please double check and make sure I'm summarising your proposal correctly in relation to the original one!!

  • Byte fee: the original proposal was 0.1 milicents, you suggest 2x? I think this is fine.
  • Weight fee: the original proposal was CENTS/[100*ExtrinsicBaseWeight] = CENTS/[100*85_795 * WEIGHT_PER_NANOS] = CENTS/[100*85_795 * 1000] = CENTS/8_579_500_000 ≈ CENTS/9_000_000_000 = 0.111... nanocents, and you are suggesting roughly 6x? This seems quite steep to me; it's 4x more than needed to protect against the already quite pessimistic $30K per day willingness to pay. How about 2x?
  • Percentage of fees going to validators: I was suggesting 0%, you are suggesting 20%. As margin concerns are orthogonal to the rationale for the validator compensation structure, which I still believe should not fall on users due to the spillover argument, I still believe this should be 0%.

Why is it ok for the byte fee to be 50x less than Polkadot's/Kusama's if weight fee is just 2x less?

Realistically, I think the byte fee is the most important for our chain, as most content-related actions may have a lot of metadata, but not that complex compute? I guess perhaps there is some heavy bucket-related stuff when creating bags during channel creation that may conflict with that.

Proposal parameters

Well spotted. I think the issue here is most fundamentally that we have a system where it is too easy to miss how abstentions are different, so much so that I made a mistake myself in presuming that approval_threshold_percentage covered everyone, when in fact it did not.

There is already an issue about changing this here, originally planned for post-launch, but perhaps we just fix this now?

#4298

If we do this, are you fine with this specific parameter issue?

Council & Elections & Proposals

I think reducing the period length is fine, but then I think constitutionality must be increased. The reason for the increase is that detecting and coordinating against hostile proposals requires time, possibly a long time if the change in question is complex and it's not easily conveyed why it's bad, or how bad it is. I think the cost of having risky things take time is something we should accept for certain existential proposal types. Just keep in mind how long it takes to even put together a fully tested and well-reasoned runtime upgrade; it has lots of built-in latency to begin with, and the same is likely the case for other very risky changes. Having a shorter council period is, however, still great, as it increases our effective security by providing a larger number of occasions for challenging. One thing that does concern me is the monitoring cost: imagine there was an election every 24 hours? That would be excellent in terms of being able to challenge bad proposals, but it may also badly deplete the capacity to monitor effectively, as it just becomes too costly.

How about

  • Announcing Period: 9 days
  • Voting Period: 3 days
  • Reveal Period: 3 days
  • Idle Period: 1 block (basically anything > 0)
  • All constitutionalities are 2x if they are already > 1.

**I don't understand why you want to keep, let alone reduce, constitutionality in concert with a shorter council, given your security concern?**

Content, Membership, Forum, HR, Marketing ... Realistically I think there should be at least 5-10 curators being paid $1,000 - $2,000 a month + a lead with a salary of at least ~$5,000, therefore I suggest a $15,000 / month budget for the content group.

Sounds good.

Storage and distribution

Great analysis on the upload/distribution ratio, and also the very granular breakdown of resources.

My main issue with this analysis is that it assumes one worker runs one node. I don't believe this is an efficient way to organise this activity. Having many dozens, let alone hundreds, of people radically increases the costs of coordination, planning and monitoring, and reduces room for specialization. This is part of what I was trying to get at in the governance section.

Any idea of how many actual full time people would be needed if we assume

  • storage: 1 worker per replication factor, that way each backup is at least operationally redundant.
  • distributor: the most reasonable approach seems to be a family-based split, as families are most likely to represent geographic areas to begin with, and it's probably cheaper to organise work according to that, e.g. due to worker timezone matching and cloud provider relationship overhead. It's still not clear how many per family, but a super low number is where we should start I believe, so something like 5 families, 1-2 workers per family + 1 lead. I don't believe these would even need to be full-time equivalents in practice; the headcount is mostly from minimal operational redundancy.

Creators & creator payouts

I think we cannot really rely on paying based on consumption data during the first year; this requires better data collection and authentication on the distributor layer, and likely Orion v2/gateway node to allow gateway-level user accounts. So the alternative will be to just pay some sort of tiered rate based on uploads which are considered worthy. I think your final number is OK, though I thought it better to allow the DAO to provide substantially better terms during the bootstrapping phase, when the first creators are being enticed to show up to an empty party, so to speak.

What about cutting it down the middle? You decide.

How sustainable is this? ... Ideally we would need to burn more or less the same amount of JOY that we mint or just slightly less in order for the price of JOY to be sustainable over time.

The requirement of sustainability, which I take to basically mean covering costs, or profit=0 condition, is not really appropriate for the 1st year of a network system. I think YouTube itself is the best example. It is now 17 years old, and is thought to have around $300B - $500B in enterprise value, but it is still not actually clear whether it's profitable after creator and infra costs, and if it is, this came very late in its story.
Likewise, Twitch is still not profitable, but is valued at $15B.

Another reference point here is looking at the quarterly revenue of various protocols, and here it is not even distinguished what portion of this capitalizes into the native asset vs. goes to service providers like liquidity providers or infra operators. Even when looking at the very depressed crypto capital markets, you can see that the valuations of these assets, which is what determines their ability to fund expenses, are a very substantial multiple of these numbers, and many of these will not be profitable, even Ethereum, despite EIP-1559.

That leaves us with data size fee as our last hope to counter high inflation. Ideally they should cover at least ~$10,000 of burns per day.

So I think we should not do this, and as a general point, when we look at where value should be captured, it should be places where people are consuming, and not where they are producing positive spillovers, hence charging fees on NFT/CRT features for consumers for example.

The cost of storing AccountData

  • Thanks for fixing those calculations! I actually just presumed u128 was 128 bytes, it was not a fat finger mistake :D
  • I tried to pin down where ExistentialDeposit: Balance = currency::MILLICENTS actually came from, seeing it on Carthage; perhaps I took that as a Kusama value or something?
  • I don't agree that Polkadot existential deposit is right for us:
    • The fact that it was set to dust $1, that this limit got as high as $50 USD, and that this was thought to be fine, shows that Polkadot was not intended for low-value uses in my opinion. This is also in general my impression: it's meant for parachain block processing and interop secured by the relay chain, not things like we have in mind.
    • Kusama has been running for 3 years, securing a lot of value, and I don't believe the community or devs would have just left a dangerous ED value in place when all it takes is an upgrade. It was meant as a value-bearing chain for the newest features, but I don't think that implies known problems would be disregarded.
    • A likely very substantial contributor to Ethereum state bloat is that it must store the contract code for each new deployment, which may be enormous (up to 24Kb, yet people still need to work around it), in state. We do not. Our heaviest objects of which normal usage patterns are likely to create large numbers are channels (and vids), and you get something like 380M channels on 144GB (quick check after this list), which is way, way bigger than YT even.
    • State bloat is largely an unsolved problem in practice across the industry; I don't even think it's a well-posed problem beyond the mere observation that when the state gets bigger, bad things can happen. So I don't think it's a good tradeoff for us to accept a wide range of costs in our chain being ramped up 10x or 100x for a very long-term problem whose severity and relevance for our system we do not know, and where current solutions don't actually solve the problem in a nuanced way. Reminder: Solana charges $0.00025 in tx fees at scale, which, even if that only applied to balance transfers, would be way cheaper than this.
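
As a quick sanity check on the channel figure above, assuming a channel storage entry ends up at roughly 380 bytes: 144 GB / ~380 bytes per channel ≈ 380 million channels.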

For these reasons I think we should stick to a small value here, at the very least to start out with. I don't really have a solid way of distinguishing between the merit of the original price I proposed and, say, going 10x, precisely because the problem is not really well posed. For non-adversarial users, this cost should be 0, as the state is likely to grow very slowly, and so only the concern for attackers is of relevance, where I have no good attack model in mind. I would therefore just keep it as is.

@Lezek123
Copy link
Contributor

Lezek123 commented Sep 28, 2022

Second, I'm not sure I understood why it was that normal validator payments were not feasible to cover this $5,184 / year expense? At 3%, with the stated cap, this is $1.8M, out of which all validators will set commissions to at least cover their costs? (I don't think there is some sort of prisoner's dilemma in this pricing)

It's $5,184 / year per day, so if blocks are all full for the entire year, for example, validators' costs are increased by ~$1.9 mln in that year. Of course this is unrealistic within the first year, but over the long term, for the general sustainability of the platform, I think it's a good idea to give some percentage of fees to validators, as this is just a very simple mechanism to account for block space used. The actual number doesn't have to be 20% though, I think even something like 5% could work (in my estimation I took a very large margin and in the end decided to propose Polkadot's value, which was also the value we used before we disabled the "fee cut" due to a vesting pallet issue). I also think the actual number in the reward curve doesn't have to be 3% either, it can be lower; it's just about the mechanism, not about the exact numbers.
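
To spell the arithmetic out: each day of full blocks adds roughly $5,184 per year of ongoing storage cost, so a whole year of full blocks compounds to about 365 * $5,184 ≈ $1.89 mln of added yearly cost across the validator set.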

Third, I think what is missing from this analysis is that, if blocks are indeed this full year long, then the system is getting very heavy utilization. This means that the value of the system overall is likely much higher. This means simply paying validators with inflation will be much more feasible. This does highlight that we really need to make sure we get the most value out of that block space, meaning that things like compression of metadata, and offloading certain parts of what is currently on chain to off-chain Ceramic like systems may be advisable.

I agree that it may be the case that increasing value of the platform will by itself account for those costs, I think the transaction fee cut however just accounts for them in the most straightforward way: validators get some revenue that is proportional to the size of the blocks that they need to store. It seems like a very clean solution with no obvious downsides, other than that if we decide to leave reward "curve" at 3% and we reduce burning from tx fees by let's say 5% (which is a relatively very small proportion of all burning that happens) it may have a minimal impact on long-term JOY holders.

Fourth, and most importantly. The worst possible outcome is to require individual actors to pay the costs they cause in the system, up front, when they cause large positive spillovers internally in the system, as creators and users do. Since the network captures a lot of this value, the network should fund a lot of this activity, and the only role for fees is to deter attacks, which is an orthogonal concern to the financial viability of the validators. This is why, as an example, YouTube never would have asked creators to pay at any point through its highly unprofitable growth journey, despite very substantial costs.

In this specific case we don't actually make anyone pay more, we just burn 95% of the transaction fee (which is required anyway to deter attacks) instead of 100%. So instead of all JOY holders being equally rewarded by the fact that a lot of transactions are happening (and therefore a lot of burns), we make the positive impact higher for validators, but very, very slightly lower for other JOY holders (for example, 5% lower), which I think makes sense, as for validators more transactions = more costs, while holders don't need to worry about the chain size. But as I said, we can also slightly reduce validators' payments from the 3% inflation pool to counter the negative impact of less funds being burned through tx fees.

you are suggesting roughly 6x? This seems quite steep to me, it's 4x more than needed to protect against the already quite pessimistic $30K per day willingness to pay. How about 2x?

I think x2 is ok (so ~0.2 nanocents / unit)

Percentage of fees going to validators: I was suggesting 0%, you are suggesting 20%. As margin concerns are orthogonal to the rationale for the validator compensation structure, which I still believe should not fall on users due to the spillover argument, I still believe this should be 0%.

  • I think 20% may be too much and I don't think I have a convincing argument for exactly this number, except that "that's what Polkadot uses"
  • I still believe there should be some percentage that goes to validators. A number like 5% would have a very small impact on holders and it's a very clean mechanism to account for increasing costs of running a validator due to high transaction throughput.
  • If even 5% is a concern in terms of its impact, I'd put it like this: in my view it's just better to reduce the 3% inflation that goes to validators than to keep the transaction fee cut at 0%, because the payout from inflation is independent of the actual transaction throughput, or only partially dependent, while the transaction fee cut depends exactly on transaction throughput. It's like we have 2 mechanisms: one to reward validators independently of their workload and the other to reward them proportionally to their workload, and we only rely on the first one while disregarding the second.
  • I'm going to make a final suggestion to go with 5%, but I guess 0 would still probably be fine for the first year (I guess it can always be adjusted if we find it no longer works).

Realistically, I think the byte fee is most important for our chain, as most content-related actions may carry a lot of metadata, but not that much complex compute?

I was just pointing out that the byte fee gets increasingly more important the more validators there are and the lower we want the requirements for running a validator to be, and that this may explain why, with our assumptions about 100 validators and no clear "max specs / validator", we ended up with a much lower number than Polkadot/Kusama.

#4298
If we do this, are you fine with this specific parameter issue?

I answered to the issue here: #4298 (comment).

I still think those parameters are not good however. Even if we don't have the Abstain feature, it will be better to not vote at all than to vote Reject in some cases (i.e. when there are 3 CMs and there is already 1 Approval vote). If I were a CM I would think my Reject vote would always have the highest impact when it comes to rejecting a proposal (compared to any other vote or no vote at all).

I don't understand why you want to keep, let alone reduce, constitutionality in concert with a shorter council, given your security concern?

My reasoning was like this:

  • Short periods limit the amount of damage that can be done by a bad actor during a single council term. If we have 21-day periods, someone can mess things up for 21 days and there's nothing that can be done about it. With a 7-day council period, someone can mess up for 7 days, but then their term will end and they will not be re-elected.
  • Constitutionality to me is just a security mechanism to prevent some elected set of council members (or in the end it could even be a single person with multiple memberships) from passing proposals behind the back of the broader community. It doesn't necessarily work less efficiently when council terms are shorter (within some reasonable bounds), but I think it actually has a big side-effect cost when council terms are long, because it becomes an obstacle when a proposal needs to be passed quickly (again, within reasonable bounds; in this case "quickly" actually means within like 2-3 weeks).
  • A runtime upgrade is not by definition something that needs to be evaluated for at least 2 months. I can easily imagine multiple emergency situations where the system would benefit from a quicker runtime upgrade (like our Antioch release); this could be a minor change in the code that fixes a very impactful bug, for example.
  • I think generally the process of evaluating whether a runtime upgrade makes sense will usually start way before the proposal is created, and only after the community reaches some initial agreement would the proposal be created. Any deviation from this policy would make the community suspicious enough to not re-elect a council that tried to pass a questionable upgrade.

My main issue with this analysis is that it assumes one worker runs one node. I don't believe this is an efficient way to organise this activity. Having many dozens, let alone hundreds, of people radically increases the costs of coordination, planning and monitoring, and reduces room for specialization.
Any idea of how many actual full time people would be needed if we assume

  • storage: 1 worker per replication factor, that way each backup is at least operationally redundant.
  • distributor: the most reasonable approach seems to be a family-based split, as families are most likely to represent geographic areas to begin with, and it's probably cheaper to organise work according to that, e.g. due to worker timezone matching and cloud provider relationship overhead. It's still not clear how many per family, but a super low number is where we should start I believe, so something like 5 families, 1-2 workers per family + 1 lead. I don't believe these would even need to be full-time equivalents in practice; the headcount is mostly from minimal operational redundancy.

I think it is a good idea for workers to run more than 1 node, that's probably an idea I should've investigated more deeply in my review instead of just assuming 1-to-1 worker-to-node relationship.

It seems like it would be more efficient in terms of costs also:

  • With 10 distributor workers running ~10 nodes each, I think we could reduce the worker payouts from $100k to perhaps ~$35k. It wouldn't be exactly linear, as obviously the overhead of running and maintaining 10 nodes is much bigger than in the case of a single node, but it's also obviously easier for one person to manage 10 nodes than for 10 people to manage 10 nodes separately. So a budget of ~$80k / month for the distribution group under those assumptions could work (instead of $145k).
  • For storage it would be a similar story. I assumed 30 workers at $1,000 / month each; with 5 workers running ~5 nodes each we could give each worker, let's say, $3,000 / month and that would still cut total worker payouts in half, making the storage monthly budget ~$45k / month (instead of $60k).

What about cutting it down the middle? You decide.

Given we can optimize storage/distribution budgets by reducing the number of workers I think we can increase it to ~$200,000 without affecting inflation too much. Whether it really needs to be that high is hard for me to tell. $500,000 on the other hand definitely sounds like a lot, considering creators also have other options to monetize (video nfts, within-content ads etc.), so I'm all for reducing it.

The requirement of sustainability, which I take to basically mean covering costs, or profit=0 condition, is not really appropriate for the 1st year of a network system.
...
So I think we should not do this, and as a general point, when we look at where value should be captured, it should be places where people are consuming, and not producing positive spillovers, hence charging up fees on NFT/CRT features for consumers for example.

I see your perspective. I agree I was overly concerned about inflation; 0% is probably not a number we should strive for. >10% seems high, but it's not the end of the world either and is definitely more appropriate at the beginning.

However, w.r.t. the data fee, I think it should still be a number that is at least high enough to incentivize more optimized usage of our storage infrastructure and discourage spamming the storage system with 30 GB videos of white noise at almost no cost (for example).

@bedeho
Copy link
Member Author

bedeho commented Sep 29, 2022

Summary of Parameter meeting: 29th September

Today we had a meeting to discuss a subset of the current open topics, and bring some of the other group members up to speed on the topic for the purpose of their future reviews. We covered the following topics, which do not cover all current outstanding issues.

What should the fee_per_megabyte be?

This was originally proposed to be 0, primarily to make it cheap for users to use the system, and because storage providers should have their costs covered through their recurring rewards, which also cover things like allocated but idle space, not yet consumed but still costly. However, a point was made that this left the system open to a relatively easy attack where someone could exhaust bucket capacity with a single upload, blocking usage of all bags in those buckets until the lead can correct the issue. It would not even require the user to actually make the upload, which obviously would have to be very large. To prevent this, it was agreed that having a non-zero fee makes sense, so that filling up many TBs should involve a real loss; however, the cost for a normal user should be kept very low.

**Final value should be suggested by @Lezek123**

Should we give block producer > 0% of transaction fees?

Originally proposed to be 0%, hence all tx fees are burned, to resolve a vesting schedule leak attack where a user could set up a leak of x% with the block producer and split the liquid proceeds (is this still an issue?). Lezek proposed to set it to 20% again, as this would give block producers a revenue stream proportional to, and greater than, the cost of storing transactions when making certain assumptions about how many validators are involved. It should be noted that these costs are at any rate still more than covered by the staking revenue. My view is that, while it's unlikely to matter much either way, it just seems cleaner to go for 0%: it builds a nice narrative about value capture, and entirely moves any concerns about lock interactions off the table.

Final value requires some input from other reviewers still.

Constitutionality and council period length

The original question was what to set as a council period, and what to set as constitutionality for the most sensitive proposals, such as runtime upgrades. Original period was 3 weeks and most sensitive proposals were constitutionality=4.

We spent quite a lot of time discussing the specific situation around emergency upgrades and discussed alternatives like

  • Allowing council to very quickly pause platform features to block side-effects of ongoing exploits: Emergency pause proposal #4172
  • Adding new stakeholder influence, e.g. from voters or extra committees, to influence proposals in some way.
  • Adding a new special kind of emergency runtime upgrade proposal that would have lower constitutionality and be protected with social consensus by the expectation that it was only to be used for emergency bugfixes:
  • Making sure to have a public process of review and testing before deployment.

We also discussed the problem of how the council can securely fix a problem without compromising on accountability; an issue has been created for future work: #4329

On the council periods @Lezek123 suggested a total period length of 1 week, and a constitutionality of 3 as the max. The worst attack is one where a bad council waits until the last block of the reveal period of the first CP, getting to constitutionality=1 without the first election being informed about this risk. Then they are free to pass it whenever they want, getting to constitutionality=2 during that next full CP. If they are able to win the election in that period, then they can pass it in the first block of the next idle period. This means that, from the first moment the proposal is known until it passes, effectively only one council period has elapsed, and two elections have taken place while the proposal was visible.

Opinion: I think this is too few effective elections (3-1=2) and too little time (1 CP = 1 week) to prevent a bad attack. At the same time I think we gain relatively little in emergency responsiveness, because that fundamentally just seems to require some very different tools to work well. Most non-adversarial upgrades also will not be so urgent, so we have limited upside there. My v2 proposal was a council period of 9+3+3 days+1 block = 15 days and constitutionality 2x4=8. I'm fine to reduce that, but the suggestion above is too risky.

Final value requires some input from other reviewers still.

proposal parameters: quorum & threshold

Here we discussed what to do with abstentions in order to make sure the system worked in an intuitive way, e.g. that abstention was not more powerful than rejecting in preventing execution, while also avoiding opaque strategic behaviour. We concluded:

  1. remove abstention special treatment, explained here: Include abstention votes #4298
  2. require thresholds to be no less than quorum for all proposals, otherwise non-voting is more powerful than rejecting.

@mochet
Copy link

mochet commented Sep 30, 2022

I agree with @mochet that the number seems low. It makes council a very centralized body, very dependent on a single actor.
I don't think I have a good-enough argument against this particular number, but if we were to go with it, I think the council periods should be shorter, because in that case bad actors can be replaced quickly and will have a more limited impact (for example, they wouldn't be able to stale key proposals for weeks). I expand on this thought further below.

I agree with this. I think in general the early days of the launch period will be hectic: although we do have an enthusiastic community, we will still be opening up paying roles to probably a large body of completely unknown people with zero reputation.
We can count on a certain amount of stability from known actors within the current community, but we've never had any strong financial incentive for outside actors to participate, and having long elections with such a small council size seems like it would create an optimal situation for deadlock, or for strong control of the council and subsequently all working groups.

I think I am heavily in favor of retaining the current 7-day council term as a counter to this, as it has been demonstrated repeatedly through all testnets to result in the most activity. At a fundamental level the council's performance (and the DAO's "health", basically) is very much a function of reliability in achieving accurate and timely reporting, and if we start with the same highly consistent expectations it makes it much easier for the testnet to transition to mainnet. Reports have basically been the cornerstone of our DAO's health, and if in the first term or two they are not being done properly and consistently then it should ring alarm bells... but if it takes 4-6 weeks to realize this then that is a problem.

(You can of course require WGs to report more regularly than the council term length; however, this means the two are no longer tied together, which makes it quite a bit more difficult to effectively measure the council's performance and gauge their understanding and decision-making process. It makes a lot more sense for the rhythm of the WGs and the council to be tied together from the beginning and then consider extending council terms once things are running fluidly. This also makes setting initial budgets a lot simpler, rather than having them decided on sporadically and in a disjointed fashion.)

With a long cycle, such as 2-3 weeks, I expect the time required for the initial "learning behaviour" to not only be significant, but also for any required corrective behavior to take a long time. What also needs to be factored into all of this is that the creation of proposals to announce/hire/fire/slash leads, and also the proposals involved in monitoring reports and work consistency, is significant.

It may seem that it would be easy to replace an AFK, malicious or underperforming lead rapidly, but in reality this operation can take as long as 5-7 days (at minimum!).

Just "booting up" the DAO on mainnet will require somewhere in the region of 20+ approved proposals before a single lead is even hired (depending on whether all WGs are treated equally). This could take 2 or more weeks.

Every single time we have restarted a testnet it has taken quite an amount of time to get things going at a basic capacity. Almost every single time we've replaced a lead it has taken longer than a week to get a replacement. Especially as the token value will be unknown, and since many actors may have zero reputation this time delay should be accounted for.

In light of the above I would suggest:

  • a council size of no less than 5-6
  • a council cycle length of no more than 7-9 days
    The only mildly strong argument against extremely short council periods is that the workload is immense and it isn't always possible to achieve much in such a short time span, but the council can solve this by utilizing a WG to hire workers specifically to organize the paperwork side of things.

On every single testnet, it has been critical to establish "tempo" (the regular filing of proposals and following of processes/deadlines) and it usually takes quite an amount of time before it fully works--having a very long council cycle would allow for too significant of a margin of error into things and it would be better to start with rapid fire council terms.

On top of all of the above, when measuring the metrics of the platform as a whole, I think it will be far more effective, especially early on to measure the council and WG's performance very frequently (weekly, basically). With a council period of 2-3 weeks the measurements can still be done but they are far less effective in the crucial starting stages of the DAO.


I think it is a very good idea to browse through past proposals starting from the last page to get an idea of how long it took to spin things up the last time the testnet relaunched: https://dao.joystream.org/#/proposals/past -- and this performance was achieved with quite an element of planning/coordination on JSG's side as well as the incentives v3 framework.
Carthage will be a good test of how rapidly the council/community can coordinate.

Also keep in mind that with

  • incentives v3 going away
  • 2 new WGs being added
  • real world market conditions
  • vesting schedules
  • real world validator control
    That even if the performance on the Olympia testnet seems optimal in any way, there are going to be a huge variety of outside factors that will impact timing and pace

@bedeho
Copy link
Member Author

bedeho commented Oct 2, 2022

Feedback on Feedback

This is my reply to #4300 (comment). I already partially replied here

#4300 (comment)

Calculating max cleanup inclusion fee

So first of all - it doesn't have to be constant in order for us to be able to use its result as a config parameter, for example we can do:

Right, so value has to be constant, as per this, but you are saying it does not need to be a Rust constant, is that your point?

If so, then yes, this may be better, even though it introduces the risk that someone inadvertently forgets to keep this constant. What I wonder about, though, is whether any of these lower bounds on reclaim fees will actually ever be relevant; perhaps the primary cost will always be much higher in practice. This is always the case in Polkadot, hence they don't even try to have some sort of code-level guard for this. If this is the case for us also, then the complexity of introducing all of this may not be worth it, and certainly not in terms of timeline.

How about this: we first stick to the simple approach right now, look at how the fee bounds relate to the primary bloat bonds, and if they are the same order of magnitude, then we consider introducing this smarter, more automated way?

Let me know what you think.

pallet_multisig and calculating storage entry size

Great breakdown here, so to summarise

  1. need to use more careful encoding assumptions for key types
  2. need to add pallet prefix
  3. need to add storage prefix

I believe that these are however not accounted for in the Polkadot/Kusama estimations used for bloat bonds, correct? If so, is there no chance that these (1-3) contributions to the size are somehow qualitatively different? My baseline assumption is no, but then it is also strange why they would consistently get this wrong.

This is a common mistake in this proposal, so I won't point it out in all the other places where it occurs, unless it results in some broken assumptions.

👍

Staking:

With min_inflation exactly the same as max_inflation don't we effectively end up with a flat line?
It seems like other parameters (like ideal_stake) have no meaning in this case.

Yes, seems like a fat finger here. So I was going by https://research.web3.foundation/en/latest/polkadot/overview/2-token-economics.html#npos-payments-and-inflation, where

  • min_inflation, as indicated by the docs, is equivalent to I_0, i.e. the total inflation when there is no staking. In Polkadot it is 2.5%, which is why we should just scale that down by 1/3, as we are doing that to the ideal total inflation anyway.
  • max_inflation, as indicated by the docs, is equivalent to I(x_ideal), i.e. the total inflation at the ideal staking rate. This is where it actually should be 3%. (See the sketch after this list.)
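
To make this concrete, here is a sketch of how those figures would plug into the standard `pallet_staking_reward_curve::build!` macro (values are parts-per-million; the numbers are just the ones discussed here, and the falloff is assumed to stay at Polkadot's 5%, which we have not actually discussed):

```rust
use sp_runtime::curve::PiecewiseLinear;

pallet_staking_reward_curve::build! {
    const REWARD_CURVE: PiecewiseLinear<'static> = curve!(
        min_inflation: 0_008_000,  // I_0 ≈ 2.5% / 3, total inflation with nothing staked
        max_inflation: 0_030_000,  // I(x_ideal) = 3%, total inflation at the ideal staking rate
        ideal_stake: 0_500_000,    // x_ideal = 50%
        falloff: 0_050_000,        // assumption: keep Polkadot's decay rate
        max_piece_count: 40,
        test_precision: 0_005_000,
    );
}
```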

I did not see any input on whether 3% overall stake inflation, and hence a 6% rate of return on all stake (since the staking rate is assumed to be at x=50%), is the right level. I am going to ask the validators we are speaking with about this, but as always my presumption would be that we really have a very low security risk in the immediate next phase.

If a bad actor ends up in council with just minimum stake and the proposed parameters of 21 days council term and a $10,000 reward a month, they're almost doubling their stake during the term, no matter what they do or don't do during said term.

Fair enough, though I am not totally sure that this is a problem directly? One may say $10K is just too little, it's effectively the cutoff for any FM induction so far, but the fact that you have a high ROI is not per se a problem for the system. I don't think you'd actually make it with that amount though, competition ensures that. What would be sad is if people are unable to get in, despite being very competent and visionary, simply because they don't have the capital, and markets for borrowing $JOY are not functional.

I guess my question is: what problem are we solving for with this stake? Here are some candidates

  1. "Incentive compatibility: work in the interest of the DAO because of the interest in optimising the value of your MinCandidateStake": I think this rationale has very limited power for all ranges of values of this parameter which do not correspond to very big holders. Now we could just embrace a "whale"-oriented council for now, so voters can't even elect someone who is not very, very big. This would certainly be more safe, but could crowd out very qualified people who show up down the line.
  2. "Keep out the knuckleheads": so basically just a low-effort filter to not make elections noisy and to avoid someone idiotic, but not highly motivated or financed, slipping in. This has some utility I guess, but I'm not sure how much. Perhaps the $10K value already does this?

So what do you want to do here?

next_budget_refill ... It's not a good idea to set this value to 0.

Oh, I thought blocks were numbered from 0. In that case yes, let's go with 1; I wanted it to run on the first block, basically.

members

Yeah, DefaultInitialInvitationBalance was definitely way too high here. I'm not sure where our fees are going to land in the end, but if the numbers presented are any indication, then $5 seems too high. I also think DefaultMembershipPrice is OK to be decently high, because if you are actually buying a membership, then you most likely have already obtained some sizeable balance to begin with, e.g. via purchase. I am happy with your numbers here.

working_group

I think this especially applies to DistributionWorkingGroup, unless we want each member to run at least 3-4 nodes?

As I alluded to earlier, I think this definitely should be the model, not hundreds of workers; however, this is not per se an argument for keeping the parameter low. The primary argument for keeping it low is to make sure that, if somehow every lead unwittingly maxes out their worker pipeline, they don't exhaust, or even exceed, the block weight limit, or alternatively just crowd out lots of other transactions.

The effect of this will be cumulative, so the rewards will no longer happen in a "cascade of 10 blocks apart" right after the first reward.

Right, yes, that cascade effect is not durable, but is there an actual problem here? Each group gets paid at a fixed, reasonably frequent interval, and no two groups pay out on the same block. Does the nth payout gap matter directly?

storage ... So it's just $0.0000000004 per MB, which is obviously too low (that's $0.0004 per TB of uploaded data!)

So, as we have covered, this should not be considered a revenue driver at present; the only real case is to deter trivial congestion of buckets. That type of attack should be substantially less attractive than a denial-of-service block congestion attack, as you are doing it one bucket at a time, the effects are local to only those bags, and the actions are easily reversible, perhaps even automated. Compounding this is that we don't know how big buckets will be, so it's really hard to say anything much. I would say something like $50/TB is unlikely to bother users much, and brings us well above the free level I originally proposed. This is adjustable after all, so being wrong is not so costly.

So having that in mind, do we want MaxStorageBucketsPerBag to be 13 or 10?

From the old values it would be 13.

proposals_engine ... I think the only real risk of this value being to low is that since there can be only some number of active proposals at any given time, this means someone

Yes I agree, but it just seems fundamentally wrong to allow slashing more than what someone staked, which is currently the case due to #4287, unless all proposals have their stake raised above this rejection fee, which then has the secondary effect of blocking newcomers from making proposals. So either:

a) we accept this; perhaps it's just very unlikely that newcomers can make sensible proposals?
b) fix the bad slashing policy and raise this value, while keeping the staking requirement low for certain proposals
c) just raise the value, and accept that people will in some cases possibly be slashed more than they staked.
d) just leave everything as is, and hope that $1 USD is enough or that the problem is not as bad as we think.

Pick whatever you want.

proposals_codex ... Please note that FundingRequestProposalMaxAmount is actually per account, so with FundingRequestProposalMaxAccounts we get $200k per proposal.

Well spotted! It feels like we should change the meaning of this limit, it's bad. The change is already covered by this much broader enhancement #3355, but it seems severe enough that we should fix it now. Added issue here: #4333

@bedeho
Copy link
Member Author

bedeho commented Oct 2, 2022

Feedback on 2nd Feedback

This is intended as feedback to #4300 (comment).

It's $5,184 / year per day, so if blocks are all full for the entire year, for example, validators' costs are increased by ~$1.9 mln in that year. Of course this is unrealistic within the first year, but over the long term, for the general sustainability of the platform, I think it's a good idea to give some percentage of fees to validators, as this is just a very simple mechanism to account for block space used. The actual number doesn't have to be 20% though, I think even something like 5% could work (in my estimation I took a very large margin and in the end decided to propose Polkadot's value, which was also the value we used before we disabled the "fee cut" due to a vesting pallet issue). I also think the actual number in the reward curve doesn't have to be 3% either, it can be lower; it's just about the mechanism, not about the exact numbers.

In this specific case we don't actually make anyone pay more, we just burn 95% of the transaction fee (which is required anyway to deter attacks) instead of 100%. So instead of all JOY holders being equally rewarded by the fact that a lot of transactions are happening (and therefore a lot of burns), we make the positive impact higher for validators, but very, very slightly lower for other JOY holders (for example, 5% lower), which I think makes sense, as for validators more transactions = more costs, while holders don't need to worry about the chain size. But as I said, we can also slightly reduce validators' payments from the 3% inflation pool to counter the negative impact of less funds being burned through tx fees.

So I summarised our meeting discussion on this here #4300 (comment) under Should we give block producer > 0% of transaction fees?.

Just to summarise my POV:

  1. I believe we should focus on setting transactional resource pricing narrowly in terms of making attacks unattractive.
  2. In terms of economic efficiency, the right source of financing for validation costs is not the people wanting to use transactional resources, it's the owners of the system overall.
  3. Even if we say that it's nicer, or more automated perhaps, to use fees rather than adjusting the staking curve, I don't believe it is the right model here, and I'm also not entirely sure it's that much worse to just update staking payoffs over time, as these adjustments don't need to happen very frequently.
  4. I take your point that we are not actually raising the cost for end users, just redirecting the flow, but in a sense this is just a foregone opportunity to reduce the cost for end users, if you will.

In the end, I am fine with going with the 5% you are recommending, so long as we are sure the vesting leak issue is not at play.

I still think those parameters are not good however. Even if we don't have the Abstain feature, it will be better to not vote at all than to vote Reject in some cases (i.e. when there are 3 CMs and there is already 1 Approval vote). If I were a CM I would think my Reject vote would always have the highest impact when it comes to rejecting a proposal (compared to any other vote or no vote at all).

I believe the conclusion to have threshold > quorum fixes this now?

My reasoning was like this:

I guess these have been discussed already.

It seems like it would be more efficient in terms of costs also:

Sounds good.

Given we can optimize storage/distribution budgets by reducing the number of workers I think we can increase it to ~$200,000 without affecting inflation too much. Whether it really needs to be that high is hard for me to tell. $500,000 on the other hand definitely sounds like a lot, considering creators also have other options to monetize (video nfts, within-content ads etc.), so I'm all for reducing it.

Sounds good.

@Lezek123
Copy link
Contributor

Lezek123 commented Oct 3, 2022

I tried to pin down where ExistentialDeposit: Balance = currency::MILLICENTS actually came from, seeing it on Carthage; perhaps I took that as a Kusama value or something?

Kusama's value is 1 cent, which is x1000 higher

I don't agree that Polkadot existential deposit is right for us

I wasn't suggesting to use the same value, just the same state size limit that it leads to. This means in our case (i.e. assuming a $60M market cap) it's just a few cents, not 1 dollar.

...
For these reasons I think we should stick to a small value here, at the very least to start out with. I don't really have a solid way of distinguishing between the merit of the original price I proposed and, say, going 10x, precisely because the problem is not really well posed. For non-adversarial users, this cost should be 0, as the state is likely to grow very slowly, and so only the concern for attackers is of relevance, where I have no good attack model in mind. I would therefore just keep it as is.

I think those are good arguments to keep it small, possibly even smaller than my suggestion of 2 cents; however, 1 milicent is an extremely low value. It's even lower than the tx length fee, which we agreed to set to 0.2 milicents / byte. With a 1 milicent existential deposit we get a bloat bond fee of just ~0.007 milicents / byte.

Without good benchmarks it's hard to tell what the correct value should be here, but if we were to go with anything substantially lower than 1 milicent / byte then it doesn't really matter, because in pretty much all cases the cleanup tx fee would end up being higher than the bloat bond calculated this way.

Perhaps going with 1 milicent / byte makes sense then. In that case the forum thread bloat bond may be the only one with a max. inclusion fee close to 1 milicent * stored_entry_size, so this number won't have any effect on the price of channel / video creation. At the same time it would be high enough to make any sense at all, and it will prevent the state from ever reaching > 6 TB (although, as pointed out in your comment about the number of channels, it's very unlikely it will ever get even close to this number).
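
(For reference on the 6 TB bound: at the assumed $60M market cap the entire supply is worth 6 * 10^12 milicents, so even if every token were locked up as bloat bonds at 1 milicent / byte, that could pay for at most 6 * 10^12 bytes = 6 TB of state.)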

So how about this (a rough code-style summary follows the list):

  • ExistentialDeposit: 144 milicents (1 milicent / byte, 14x lower than my original proposal, 7x lower than Kusama's hardcoded value, about the same as Kusama's actual value due to current price)
  • Min. channel bloat bond: 376 milicents (but since deleteChannel inclusion fee is ~2000 milicents it doesn't really matter)
  • Min. video bloat bond: 310 milicents (but since deleteVideo inclusion fee is ~700 milicents it doesn't really matter)
  • Min. thread bloat bond: 152 milicents (since deleteThread inclusion fee is ~190 milicents it doesn't matter currently, however it may if we were to reduce the fees slightly)
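
Expressed as a rough sketch (illustrative identifiers only, not the actual runtime parameter names), all of these are just the per-byte price times the approximate encoded entry size:

```rust
use frame_support::parameter_types;

parameter_types! {
    pub const PricePerStateByte: Balance = MILLICENTS;          // 1 milicent / byte
    pub const ExistentialDeposit: Balance = 144 * MILLICENTS;   // ~144-byte account entry
    pub const MinChannelBloatBond: Balance = 376 * MILLICENTS;  // ~376-byte channel entry
    pub const MinVideoBloatBond: Balance = 310 * MILLICENTS;    // ~310-byte video entry
    pub const MinThreadBloatBond: Balance = 152 * MILLICENTS;   // ~152-byte forum thread entry
}
```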

Right, so value has to be constant, as per this, but you are saying it does not need to be a Rust constant, is that your point?

Yes

What I wonder about, though, is whether any of these lower bounds on reclaim fees will actually ever be relevant; perhaps the primary cost will always be much higher in practice.

Based on the above suggestion about ExistentialDeposit, it seems like the inclusion fees will always be relevant.

How about this: we first stick to the simple approach right now, look at how the fee bounds relate to the primary bloat bonds, and if they are the same order of magnitude, then we consider introducing this smarter, more automated way?

They are definitely in the same order of magnitude, which suggests we should introduce the min. cleanup tx inclusion fee calculation mechanism.

I believe that these are however not accounted for in the Polkadot/Kusama estimations used for bloat bonds, correct? If so, is there no chance that these (1-3) contributions to the size are somehow qualitatively different? My baseline assumption is no, but then it is also strange why they would consistently get this wrong.

Polkadot's / Kusama's bloat bonds model is very different. First of all, on those chains ExistentialDeposit seems like something completely separate from any other bloat bonds (which are calculated using the deposit function).

Then there is the deposit function which looks like this:

	pub const fn deposit(items: u32, bytes: u32) -> Balance {
		items as Balance * 20 * DOLLARS + (bytes as Balance) * 100 * MILLICENTS
	}

And here the cost of storing any new item is 20,000x higher than storing 1 byte of data, so it seems to me that they can pretty much ignore the actual size of the key, as whether it's 0 or 128 bytes has a negligible effect on the price of storing a new item, and it can already be considered "factored into" the per-item fee (I'm not saying that's why they are not including it, but it may be a potential reason).

The deposit function is also used to calculate the deposit required when adding new items to already stored collections for example, in which case items is set to 0. In this case the storage key size doesn't matter, as the entity is already in storage (it's just that its size is getting increased).

Perhaps this is something we should consider also, i.e. either we set the bloat bond for entities like channels based on the max. size of a channel object with all collections fully populated (since all of them have bounds), or we charge additional bloat bonds when new items are added to collections like channel.collaborators / channel.dataObjects etc. The 2nd approach seems much more complex, but I think the 1st one is reasonable with a fee like 1 milicent / byte (as proposed before), since the channel's max_encoded_len() is 865, which means the bloat bond would still be lower than the cleanup tx inclusion fee, while the state-size accounting will be much safer.

If a bad actor ends up in council with just minimum stake and the proposed parameters of 21 days council term and a $10,000 reward a month, they're almost doubling their stake during the term, no matter what they do or don't do during said term.

(...)
So what do you want to do here?

I don't think $10,000 is a problem in itself, I think it just makes sense for 1 council term payout to be substantially lower than min. candidate stake, now that we don't have any way to "punish" a bad council member.

Provided that we're lowering the council term, I think $10,000 is ok.

Right, yes, that cascade effect is not durable, but is there an actual problem here? Each group gets paid at a fixed, reasonably frequent interval, and no two groups pay out on the same block. Does the nth payout gap matter directly?

I think it matters, because the current design doesn't even ensure that different groups don't get paid in the same block: if the payment for group A happens every 14,400 blocks and for group B every 14,410 blocks, then at some point (at block 207,504,000 for example, or any other block divisible by both 14,400 and 14,410) the payments for those 2 groups will happen in the same block.

I think it's a bad design, although it probably wouldn't cause anything critical to happen.

The correct design would be to make the reward periods equal, but also make the first reward block different for each group (in that case the initial gap between groups would never increase)

I would say something like $50/TB is unlikely to bother users much, and brings us well above the free level I originally proposed.

Great, I was going to suggest a similar value in the end, so I think $50 / TB, which is 5 milicents / MB is good.

Yes I agree, but it just seems fundamentally wrong to allow slashing more than what someone staked, which is currently the case due to #4287, unless all proposals have their stake raised above this rejection fee, which then has the secondary effect of blocking newcomers from making proposals. So either:

a) we accept this; perhaps it's just very unlikely that newcomers can make sensible proposals?
b) fix the bad slashing policy and raise this value, while keeping the staking requirement low for certain proposals
c) just raise the value, and accept that people will in some cases possibly be slashed more than they staked.
d) just leave everything as is, and hope that $1 USD is enough or that the problem is not as bad as we think.

Pick whatever you want.

I think my previous conclusion was that $1 is ok, as the only attack scenario I could come up with can be easily deterred by the council and even with $1 fee the attacker would quickly run out of funds, possibly without achieving anything at all.

In the end, I am fine with going with the 5% you are recommending, so long as we are sure the vesting leak issue is not at play.

I'm sure it's no longer an issue

I believe the conclusion to have threshold > quorum fixes this now?

Yes

Opinion: I think this is too few effective elections (3-1=2) and too little time (1 CP = 1 week) to prevent a bad attack. At the same time I think we gain relatively little in emergency responsiveness, because that fundamentally just seems to require some very different tools to work well. Most non-adversarial upgrades also will not be so urgent, so we have limited upside there. My v2 proposal was a council period of 9+3+3 days+1 block = 15 days and constitutionality 2x4=8. I'm fine to reduce that, but the suggestion above is too risky.

From my POV, council periods of 15 days and constitutionality of 8 are even worse than the initial proposal of 21 days / constitutionality = 4, because this means the runtime upgrade proposal now takes even longer to execute. In the best case scenario it's 3-4 months(!), and this is assuming 8 councils in a row are in perfect 100% agreement. I think that would almost defeat the utility of runtime upgrades for any purpose other than a big-release scenario, where some completely new functionality is introduced but nothing is really time-sensitive. In more time-sensitive situations I would imagine people rather moving to a fork of Joystream than waiting 4 months for a proposal which may or may not execute.

I think it's especially bad because we'll be a new chain and I can see a need for many upgrades to be executed relatively quickly in the initial stages of mainnet (because of bad assumptions w.r.t. parameters, bugs that may only be discovered at a larger scale, functionality that will definitely need to be added, etc.), but with values as constraining as those this would become pretty much impossible.

My final suggestion, and the highest values I can imagine working, would be:

  • Council term: 15 days (as suggested)
  • Runtime upgrade constitutionality: 4

@bedeho
Copy link
Member Author

bedeho commented Oct 3, 2022

So how about this:

Can you just list the tx cost values you get for some key transactions with these numbers, like you did last time? Fine to just dump CLI screenshots here.

They are definitely in the same order of magnitude, which suggests we should introduce the min. cleanup tx inclusion fee calculation mechanism.

Ok.

How can we test correctness? For example, by adding some sort of check in integration tests that verifies whether cleanup fees are in fact less than state bloat bonds?

And here the cost of storing any new item is 20,000x higher than storing 1 byte of data, so it seems to me that they can pretty much ignore the actual size of the key, as whether it's 0 or 128 bytes has a negligible effect on the price of storing a new item, and it can already be considered "factored into" the per-item fee (I'm not saying that's why they are not including it, but it may be a potential reason).

Fair enough.

Perhaps this is something we should consider also, i.e. either we set the bloat bond for entities like channels based on the max. size of a channel object with all collections fully populated (since all of them have bounds), or we charge additional bloat bonds when new items are added to collections like channel.collaborators / channel.dataObjects etc. The 2nd approach seems much more complex, but I think the 1st one is reasonable with a fee like 1 milicent / byte (as proposed before), since the channel's max_encoded_len() is 865, which means the bloat bond would still be lower than the cleanup tx inclusion fee, while the state-size accounting will be much safer.

I agree, the first version is safer, but it does make me slightly nervous how we maintain this well. Any time anyone changes the Channel representation or its bounds, they must somehow not forget to update the lower bound on future bloat bonds. I think there is so much uncertainty in the assumptions going into these bonds anyway that perhaps it's fine to just leave it be and not add any more complexity.

Council term: 15 days (as suggested)
Runtime upgrade constitutionality: 4

Sounds good!

@Lezek123
Copy link
Contributor

Lezek123 commented Oct 3, 2022

I agree, the first version is safer, but it does make me slightly nervous how we maintain this well.

I think we can take advantage of max_encoded_len now that all collections are bounded, ie.:

    pub PostDeposit: Balance = (key_size + forum::PostOf::<Runtime>::max_encoded_len()) * price_per_byte;
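
(Here `key_size` and `price_per_byte` stand for constants we would define alongside this, and since `max_encoded_len()` returns a `usize`, the real expression would need a cast to `Balance`.)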

@adovrodion
Copy link

In the current council, the period is 7 days. In the current reality, the council is immersed in reporting; councils have little time to think about any long-term priorities. A longer period would give the council more time to do this.

@mnaamani
Copy link
Member

mnaamani commented Oct 5, 2022

Suggested updates to parameters related to validator staking, and elections.

Staking pallet

GenesisConfig

bin/node/src/chain_spec/mod.rs

history_depth

Proposal: 120

Number of eras. Essentially how far back unclaimed payouts are stored.
HistoryDepthOnEmpty = 84 is the default.
Both Polkadot and Kusama have this set to 84. Era length in Polkadot = 1 day, in Kusama = 6h.
So it is not the same length in terms of time. We will pick a similar era length to Kusama's.
Proposal: 120 (4 eras/day * 30 days => 30 days of history), although this will use more storage than the default.

validator_count

(unchanged) This will be initially set to initial_authorities.len() - sudo will increase gradually, and eventually
the council will decide. A reasonable value to reach is 100 at some point.

minimum_validator_count

If an election would ever produce a validator set smaller than this value, the validator set is unchanged.
This is referred to as "emergency mode".
Currently it is set to initial_authorities.len(). This is okay for our controlled staging and test networks.
Proposal: initial_authorities.len().min(4).
4 is the value used on Polkadot and Kusama. ".min(4)" allows the smaller initial network to not be in emergency mode.

invulnerables

This is a vector of authorities and will be set to the initial validators at genesis.
This set of validators may never be slashed or forcibly kicked.
We should set this to an empty vector with staking::set_invulnerables() via sudo once the initial validators are
decommissioned.

min_nominator_bond

Proposal: $100 - This should be relatively low barrier to entry.
Keeping in mind there is a MaxNominatorCount (20,000) and MaxNominations (16)
In worst case scenario (all nominators nominate the bare minimum) the total
backing stake possible is 2,000,000 * 16 = $32 Million.

min_validator_bond

Proposal: ~6x the monthly cost of running a reference hardware machine (~$400/month),
so ~$2,500. With 100 validators and no nominators the minimum value at stake would be
$2,500,000.

Side note: Is the reward curve tuned to reward validators a sufficient amount
given the estimated monthly cost above?

max_validator_count

We are currently setting this to None, meaning no limit. This is not a good idea: there is a risk of
too many stakers intending to validate (sitting in the waiting set), which could consume a lot of storage and
possibly cause issues with elections.
Polkadot and Kusama currently have this set to Some(4x the validator count). If we expect the network to grow
to 100 validators:
Proposal: Some(400).

max_nominator_count

Same argument as for max_validator_count: this should be Some(n).
Polkadot = 50,000, Kusama = 20,000.
Proposal: Some(20,000)
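
Pulling the genesis-level values above together, a minimal sketch of the staking section of the chain-spec genesis config (field names follow pallet_staking::GenesisConfig at the time of writing; initial_authorities, the stash being the first tuple element, and the dollars!() conversion are assumptions carried over from this discussion):

	staking: StakingConfig {
		history_depth: 120,
		// Starts at the bootstrap set; sudo/council raises it gradually towards ~100.
		validator_count: initial_authorities.len() as u32,
		minimum_validator_count: initial_authorities.len().min(4) as u32,
		// Cleared via a staking::set_invulnerables() sudo call once the initial
		// validators are decommissioned (assumes the stash is the first tuple element).
		invulnerables: initial_authorities.iter().map(|x| x.0.clone()).collect(),
		min_nominator_bond: dollars!(100),
		min_validator_bond: dollars!(2_500),
		max_validator_count: Some(400),
		max_nominator_count: Some(20_000),
		..Default::default()
	},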

Proposals

  • We currently don't have council proposals to update many of these values. We should probably
    introduce a new proposal in the future that calls Staking::set_staking_configs() to be able to update multiple
    staking-related parameters at once (see the sketch after this list).
  • The proposal named SetMaxValidatorCount should be renamed to avoid confusion: it actually sets ValidatorCount,
    so technically it was always badly named.
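
As a rough sketch, such a proposal could dispatch a call along these lines (ConfigOp is pallet_staking's update wrapper; the RuntimeCall name and the dollars!() conversion are assumptions here, and older Substrate versions name the outer enum Call instead):

	use pallet_staking::ConfigOp;

	let call = RuntimeCall::Staking(pallet_staking::Call::set_staking_configs {
		min_nominator_bond: ConfigOp::Set(dollars!(100)),
		min_validator_bond: ConfigOp::Set(dollars!(2_500)),
		max_nominator_count: ConfigOp::Set(20_000),
		max_validator_count: ConfigOp::Set(400),
		// Leave the remaining knobs untouched.
		chill_threshold: ConfigOp::Noop,
		min_commission: ConfigOp::Noop,
	});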

Initialization Constants

Some constants used only when building the genesis config that should be revised.

INITIAL_VALIDATOR_BOND

This will be the initial amount bonded by each initial authority.
Proposal: 4x the min_validator_bond (see the sketch below).
With 9 initial validators the total bonded would be $90,000.
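
For example (a sketch only; MIN_VALIDATOR_BOND is a hypothetical constant standing in for the ~$2,500 minimum proposed above):

	// Genesis-only: each initial authority bonds 4x the minimum validator bond.
	pub const INITIAL_VALIDATOR_BOND: Balance = 4 * MIN_VALIDATOR_BOND;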

Runtime Constants

in runtime/src/lib.rs and runtime/src/constants.rs

Session and Era lengths

Currently: Epoch 10min, Era: 1h -> shorter periods were suitable for testnets.
Proposed update of Session and Era lengths to match Kusama (a sketch follows this list):

  • Epoch/Session = 1h

  • SessionsPerEra = 6 (unchanged)

  • EraLength = 6h

  • BondingDuration: 112 (eras) = 28 days * 4 eras/day // like Polkadot's 28 days
    Kusama has a much shorter bonding duration of 7 days.

  • SlashDeferDuration: 111
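
A minimal sketch of these lengths as runtime constants (the epoch constant name follows the usual constants.rs convention and is an assumption here):

	// 1h sessions (epochs).
	pub const EPOCH_DURATION_IN_BLOCKS: BlockNumber = 1 * HOURS;

	parameter_types! {
		// 6 sessions per era => 6h eras, Kusama-style.
		pub const SessionsPerEra: sp_staking::SessionIndex = 6;
		// 28 days of bonding expressed in 6h eras, like Polkadot's 28 days.
		pub const BondingDuration: sp_staking::EraIndex = 28 * 4;
		// Defer slashes for all but the last era of the bonding period (112 - 1).
		pub const SlashDeferDuration: sp_staking::EraIndex = 28 * 4 - 1;
	}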

Elections (a consolidated sketch follows this list):

  • OffchainRepeat:
    Currently 5.
    Adjust it to be sensitive to the unsigned phase length: UnsignedPhase::get() / 8

Update values to match Polkadot and Kusama:

  • SignedMaxSubmissions: u32 = 16
  • SignedMaxRefunds: u32 = 16 / 4

Signed solutions

  • SignedDepositBase = deposit(2, 0) // assuming the deposit() method was updated in this proposal?

  • SignedDepositByte = deposit(0, 10) / 1024

  • SignedRewardBase = JOY / 10

  • BetterUnsignedThreshold = Perbill::from_rational(5u32, 10_000);

  • MaxElectingVoters
    Top nominators considered as voters:
    Currently 10_000.
    Increase to 12_500 (MaxNominatorCount = 20,000), so a little more than 1/2.
    It is not clear what the optimal value is, but on Polkadot it is also ~1/2 of MaxNominatorCount.
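
Consolidating the election-related values above into one sketch (declarations follow the usual election-provider-multi-phase style; UnsignedPhase, BlockNumber and Perbill are assumed to already be in scope in the runtime):

	parameter_types! {
		// Signed phase limits, matching Polkadot/Kusama.
		pub const SignedMaxSubmissions: u32 = 16;
		pub const SignedMaxRefunds: u32 = 16 / 4;
		// Minimum relative improvement an unsigned solution must provide.
		pub BetterUnsignedThreshold: Perbill = Perbill::from_rational(5u32, 10_000);
		// Re-run the off-chain worker several times within the unsigned phase.
		pub OffchainRepeat: BlockNumber = UnsignedPhase::get() / 8;
		// Top nominators considered as electing voters (~1/2 of MaxNominatorCount).
		pub const MaxElectingVoters: u32 = 12_500;
	}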

pallet_authority_discovery,babe,grandpa Pallets

  • MaxAuthorities
    Currently 100.
    Update to 100_000, as on Kusama and Polkadot.
    This really only needs to be larger than what we want the network to support, and should certainly be at least equal to what the SetMaxValidatorCount council proposal can set for the staking pallet (a one-line sketch follows).
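
	parameter_types! {
		// Only needs to exceed the largest validator set the SetMaxValidatorCount
		// council proposal could ever configure; matches Kusama/Polkadot.
		pub const MaxAuthorities: u32 = 100_000;
	}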

@Lezek123
Contributor

Lezek123 commented Oct 6, 2022

Proposal: ~6x the monthly cost of running a reference hardware machine (~$400/month),
so ~$2,500. With 100 validators and no nominators the minimum value at stake would be
$2,500,000.

$2,500 * 100 = $250,000, not $2,500,000

Side note: Is the reward curve tuned to reward validators a sufficient amount
given the estimated monthly cost above?

Validators + nominators get a total of $1,800,000 per year (3%). If we assume we have 100 validators and nominators don't get anything, that's $1,500 per validator per month, so almost 4x the estimated monthly cost per validator. Of course, in the real world some of this will go to nominators.

  • SignedDepositBase = deposit(2, 0) // assuming the deposit() method was updated in this proposal?
  • SignedDepositByte = deposit(0, 10) / 1024
  • SignedRewardBase = JOY / 10

The deposit function was not updated; we now use different functions to calculate bloat bonds.
I think the role of SignedDepositBase, however, is more than simply protecting against state bloat. For example, according to this comment from the election-provider-multi-phase pallet, it can also be slashed:

/// A deposit is reserved and recorded for the solution. Based on the outcome, the solution
/// might be rewarded, slashed, or get all or a part of the deposit back.

@mnaamani
Member

mnaamani commented Oct 10, 2022

  • SignedDepositBase = deposit(2, 0) // assuming the deposit() method was updated in this proposal?
  • SignedDepositByte = deposit(0, 10) / 1024
  • SignedRewardBase = JOY / 10

The deposit function was not updated; we now use different functions to calculate bloat bonds. I think the role of SignedDepositBase, however, is more than simply protecting against state bloat. For example, according to this comment from the election-provider-multi-phase pallet, it can also be slashed:

/// A deposit is reserved and recorded for the solution. Based on the outcome, the solution
/// might be rewarded, slashed, or get all or a part of the deposit back.

Good point. Looking at the comments in the Polkadot runtime for some ballpark figures:

	// 40 DOTs fixed deposit..
	pub const SignedDepositBase: Balance = deposit(2, 0);
	// 0.01 DOT per KB of solution data.
	pub const SignedDepositByte: Balance = deposit(0, 10) / 1024;
	// Each good submission will get 1 DOT as reward
	pub SignedRewardBase: Balance = 1 * UNITS;

How about something similar:

	// 40 Joys fixed deposit..
	pub const SignedDepositBase: Balance = BASE_UNIT_PER_JOY.saturating_mul(40);
	// 0.01 JOY per KB of solution data.
	pub const SignedDepositByte: Balance = BASE_UNIT_PER_JOY.saturating_div(100) / 1024;
	// Each good submission will get 1 JOY as reward
	pub SignedRewardBase: Balance = BASE_UNIT_PER_JOY;

@Lezek123
Contributor

Lezek123 commented Oct 10, 2022

  1. We now use dollars / cents as units everywhere; shouldn't we follow that approach here too?
  2. There is a very big difference in the price of JOY vs DOT. Even with Polkadot's very pessimistic assumption of 1 DOT = $1, that still gives 1 DOT = 16.666 JOY. Without understanding too much about the dynamics of these parameters, it's difficult for me to tell whether it's OK for the values to be so low.
  3. For SignedDepositByte I think we can just use MinimumBloatBondPerByte. I'm not sure why Polkadot doesn't just use deposit(0, 1); maybe it's because their deposit per byte is very high by default and they only want to use lower values in some specific cases.

So I would propose something like:

	// $10 fixed deposit (4x lower than Polkadot/Kusama in terms of USD value estimate, same as minimum voting/stakingCandidate stake)
	pub const SignedDepositBase: Balance = dollars!(10);
	// standard deposit fee / byte
	pub const SignedDepositByte: Balance = MinimumBloatBondPerByte::get();
	// Each good submission will get $1 as a reward (same as Polkadot and 30x lower than Kusama in terms of USD value estimate)
	pub SignedRewardBase: Balance = dollars!(1);

@mnaamani
Member

So I would propose something like:

	// $10 fixed deposit (4x lower than Polkadot/Kusama in terms of USD value estimate, same as minimum voting/stakingCandidate stake)
	pub const SignedDepositBase: Balance = dollars!(10);
	// standard deposit fee / byte
	pub const SignedDepositByte: Balance = MinimumBloatBondPerByte::get();
	// Each good submission will get $1 as a reward (same as Polkadot and 30x lower than Kusama in terms of USD value estimate)
	pub SignedRewardBase: Balance = dollars!(1);

👍

@bedeho
Member Author

bedeho commented Nov 17, 2022

Done.

@bedeho bedeho closed this as completed Nov 17, 2022