First Proposal: Mainnet Parameters #4300
Comments
My notes after an initial read. I don't know much about staking, so I didn't get into that deeply; I will re-review after a bit to let the information settle and will take a deeper look at some other things. Question: for some/all of these parameters to change, is a runtime upgrade required?
Infrastructure cost examples:
We may want to run this kind of infrastructure for the benefit of potential gateway operators, but given our use case I expect the costs involved may be higher. Note: for this kind of infrastructure it should be assumed that there may be initial costs involved with deployment/integration and they may be pivotal to ensuring initial platform growth.
Market Cap and Resource Pricing
Other things to consider:
Short selling
I think the assumption that the primary payoff for an attacker is hedonic or reputational is extremely optimistic. In order to try to model the potential profit from such an attack I picked a few cryptocurrencies with a market cap close to
As shown in the table, it takes on average a ~
My assumption is therefore: Given a current price of JOY
Now, to estimate how much a network congestion attack can affect the price of a cryptocurrency, I've chosen a few known incidents that caused significant network disruptions:
Although it's hard to draw any definitive conclusion from this data, it appears as though attacking the network over a period longer than 1 day is not optimal from the perspective of the attacker, as the
It also seems like the price impact of making a blockchain unusable for about a day can be as high as ~
With those assumptions we can calculate how much an attacker could potentially profit from a 1-day network congestion attack on the Joystream blockchain:
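Roughly, the estimate boils down to (symbols are mine, purely for illustration):

$$\text{profit} \approx S \cdot \Delta p - C_{\text{attack}}$$

where $S$ is the size of the short position, $\Delta p$ the relative price drop caused by a ~1-day outage, and $C_{\text{attack}}$ the transaction fees spent keeping blocks full for that day.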
Of course this is almost a worst-case scenario: we're not taking into account the exchange fees and other costs associated with holding the short position, nor whether the attacker would be willing to take the risk (as the effect on the price is uncertain). Given this analysis, I think it is reasonable however to make the cost of such an attack at least
Archival / validator node storage costs
This is a very important factor and something we need to consider not only in the context of an attack, but
A day of full blocks (not counting operational extrinsics etc.) is:
This is a problem not only because it increases storage requirements for validators, but also because it causes centralization and longer sync times. However, if we focus just on storage, 1 GB of data on Amazon (I'm assuming an EBS volume, since it's unclear whether S3 will have sufficient IOPS) will cost at least
Assuming we have a maximum of 100 validators, that's
The cost increase may be even greater for Hydra Indexers / Squid Archives, since they store
For validators the incentivization proposal is clear: they earn a fixed amount of JOY a year (3%) + some percentage of transaction fees. Transaction fees that get burned don't increase the value of validators' yearly rewards directly,
Therefore if we want the network to be sustainable, I think it's inevitable to give some percentage
There are 2 parameters we can tweak to achieve that:
Having that in mind, I'll describe my proposal for both of those values later.
Polkadot/Kusama values
Tx fee / byte
Both Polkadot and Kusama use
Based on calculations from the previous sections, our fee should be at least
This is a 180x lower value than 10 millicents, which could be a sign that it is underestimated.
Weight fee
Polkadot's/Kusama's weight fee is
Based on calculations from the previous sections, our fee should be at least
This is a ~8.5x lower value than 1.174 nanocents, which could be a sign that it is slightly underestimated.
Margin
We need to take some margin into account, as our calculations were based on certain assumptions, like:
For example, if the market cap of JOY were to drop to
As also shown in the previous subsection, the fees I estimated are still extremely low compared to the ones of Polkadot/Kusama. Having that in mind, I suggest taking a safety margin by multiplying each fee at least 3-5 times to account for bad assumptions.
Summary: My proposal
Taking all this into account I propose the following parameters:
Why is it OK for the byte fee to be 50x less than Polkadot's/Kusama's if the weight fee is just 2x less?
I think it all comes down to how many validators we want to have, as described in the Archival / validator node storage costs section. For example, Polkadot and Kusama want to maximize the number of validators. Kusama currently has 1000 validators, while in my calculations I used a maximum of 100. The weight fee, on the other hand, shouldn't really depend that much on the number of validators, as the cost of running a validator doesn't increase proportionally to the weight of blocks being produced.
Proposal parameters
Here I'd like to specifically bring your attention to this setup:
Because the approval quorum is greater than the approval threshold, there's no point in ever voting "Reject" for a proposal with those parameters, since voting "Abstain" has much more power than voting "Reject" in this case (because "Abstain" votes don't count towards the quorum). Assuming a council has 3 members, let's consider this example:
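A minimal sketch of that dynamic (the 66% quorum and 50% threshold figures below are assumed purely for illustration, not the proposed values):

```rust
// Illustrative pass check with an assumed 66% approval quorum and 50% approval threshold.
// Abstentions do not count towards the quorum.
fn proposal_passes(council_size: u32, approvals: u32, rejections: u32) -> bool {
    let counted_votes = approvals + rejections;
    let quorum_reached = counted_votes * 100 >= council_size * 66;
    let threshold_reached = counted_votes > 0 && approvals * 100 >= counted_votes * 50;
    quorum_reached && threshold_reached
}

fn main() {
    // 1 approval, 2 abstentions: quorum is not reached, the proposal fails.
    assert!(!proposal_passes(3, 1, 0));
    // 1 approval, 1 rejection, 1 abstention: the rejection fills the quorum,
    // and 1 of 2 counted votes meets the 50% threshold, so the proposal PASSES.
    assert!(proposal_passes(3, 1, 1));
}
```

So casting a "Reject" vote can actively help a proposal pass by filling the quorum, while abstaining cannot.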
Now let's consider a different example:
This will hold true regardless of the number of council members, as long as the approval quorum is harder to reach than the threshold.
My suggestion would therefore be to replace those parameters with:
Which makes the dynamic much clearer: it basically requires at least 2
Council & Elections & Proposals
I agree with @mochet that the number seems low. It makes the council a very centralized body, very dependent on a single actor.
I think there are multiple issues to be considered w.r.t. longer council terms:
On the other hand I don't think there is that much cost associated with making council terms shorter. It seems reasonable to assume that competent council members with a great long-term vision will just get re-elected over and over. With time they will also gain the advantage of a well-established reputation in the community, and it's very unlikely they'll be replaced unless they really mess something up. Therefore my suggestion would be to:
Budgets & Spending and Inflation
Content, Membership, Forum, HR, Marketing
I'm not sure I would put the Content Working Group in the same bucket as the other ones mentioned here (membership, forum, HR, marketing). I think the Content Working Group has an important role to play from the very beginning, and it also has a lot of responsibility given its on-chain privileges. Things can easily go astray if we don't have enough responsive curators to watch over the platform and prevent it from being flooded with inappropriate content. Realistically I think there should be at least 5-10 curators being paid
Storage and distribution
I think for the purpose of estimating the budget for those groups we need to:
For the purpose of my calculations I'll assume 10,000 DAUs each consuming about 10 minutes of content per day (as put in the original proposal) is the average daily activity we strive to reach in the first year of existence.
Content uploaded vs content consumed
I had a suspicion that the proposed budget for the storage group may be overestimated, so to get a better perspective I collected some statistics about existing video platforms, as summarized in the table below:
Based on those statistics I think the best upload / distribution ratio to use for my calculations will be
I think Twitch & PeerTube kind of confirm that theory, PeerTube being a relatively new invention, while Twitch is primarily focused on livestreaming, making the impact of accumulated content less pronounced. This means that if we assume 1.5 PB being consumed monthly on average during the first year of Joystream's existence, we should assume about
Calculating storage and distribution infrastructure costs
Normally I would expect some kind of exponential growth when it comes to the storage space required (due to increasing adoption), but even if we assume it happens linearly, so there is actually 30 TB of content uploaded every day, with the replication factor of
Here are a few things that I think were overlooked in the original proposal:
Some of those calculations depend on the total number of storage / distribution nodes and workers. In my calculations I'm assuming there are
Storage and distribution budget: My proposal
Based on my calculations I propose the following budget parameters for the storage and distribution working groups:
Storage WG
Distributor WG
The key aspect of my proposal is that the budget for the distributor working group is actually significantly higher than the one for the storage working group, which I think is reasonable considering it's assumed to have many more workers.
Creators & creator payouts
I think in general it is a good strategy to estimate budgets for different groups based on the set of assumptions that were already made w.r.t. other groups, in this case the storage and distribution groups specifically. For example, we assumed that on average 1.5 PB of content is consumed each month during the first year. If we take an average length of video
YouTube creators earn ~$5 per 1000 views (source), which
With those assumptions realistically creator payouts could be as low as ~
Total monthly budget & inflation
My proposal
Using my proposed estimation of budget spendings, we get:
How sustainable is this?
Ideally we would need to burn more or less the same amount of JOY as we mint, or just slightly less, in order for the price of JOY to be sustainable over time. We've made some assumptions about content consumption and storage, one of which was that there is about ~
This means there are ~5,000 videos uploaded daily. Out of this 1.3 cents, 80% is burned, which gives us ~1 cent burned per video. With 5k videos uploaded daily that's just $50 of JOY burned per day for video upload inclusion fees. Another source of burning is membership fees. Other sources of burning include:
For the last 2 it's very hard to estimate the number of tokens burned, but with a 2% platform fee we would need a volume of ~
That leaves us with the data size fee as our last hope to counter high inflation. Ideally it should cover at least ~
How much should the data size fee be?
My answer would be: as high as we can tolerate. It's not hard to calculate that we would need an average (200 MB) video upload to cost ~
My suggestion: make the data fee
State bloat bond pricing
The cost of storing
Replying to the first part now, just to unblock you and others. Will continue with the next part tomorrow afternoon or on Thursday.
In the end I would prefer not to go from $10K to $30K per day, as low fees are quite important for us, and the short-term (say next 1-2 years) risk of this issue seems low, but I'm not wedded to it!
I agree the market cap one is by far the riskiest assumption here, both ways. There is no way around this other than actually updating this fee with a runtime upgrade proposal when one sees the market cap stabilize at some new order of magnitude, very different from what was assumed, over some sustained period of time.
Please double check and make sure I'm summarising your proposal correctly in relation to the original one!!
Realistically, I think the byte fee is most important for our chain, as most content-related actions may have a lot of metadata, but not that complex compute? I guess perhaps there is some heavy bucket-related stuff when creating bags during channel creation that may conflict with that.
Well spotted. I think the issue here is most fundamentally that we have a system where it is too easy to miss how abstentions are different, so much so that I made a mistake myself in presuming that
There is already an issue about changing this here, originally planned for post-launch, but perhaps we just fix this now? If we do this, are you fine with this specific parameter issue?
I think reducing period length is fine, but then I think constitutionality must be increased. The reason for the increase is that detecting and coordinating against hostile proposals requires time, possibly a long time if the change in question is complex and it's not easily conveyed why it's bad, or how bad it is. I think the cost of having risky things take time is something we should accept for certain existential proposal types. Just keep in mind how long it takes to even put together a fully tested and well reasoned runtime upgrade; it has lots of built-in latency to begin with, and the same is likely the case for other very risky changes. Having a shorter council period is however still great, as it increases our effective security by having a larger number of occasions for challenging. One thing that does concern me however is the monitoring cost: imagine there was an election every 24 hours? That would be excellent in terms of being able to challenge bad proposals, but it may also badly deplete the capacity to monitor effectively, as it just becomes too costly. How about
**I don't understand why you want to keep, let alone reduce, constitutionality in concert with a shorter council, given your security concern?**
Sounds good.
Great analysis on the upload/distribution ratio, and also the very granular breakdown of resources. My main issue with this analysis is that it assumes one worker runs one node. I don't believe this is an efficient way to organise this activity. Having many dozens, let alone hundreds, of people radically increases the costs of coordination, planning and monitoring, and reduces room for specialization. This is part of what I was trying to get at in the governance section. Any idea of how many actual full-time people would be needed if we assume
I think we cannot really rely on paying based on consumption data during the first year; this requires better data collection and authentication on the distributor layer, and likely Orion v2/gateway node to allow gateway-level user accounts. So the alternative will be to just pay some sort of tiered rate based on uploads which are considered worthy. I think your final number is OK, but I thought it better to allow the DAO to provide substantially better terms during the bootstrapping phase, when the first creators are being enticed to show up to an empty party, so to speak. What about cutting it down the middle? You decide.
The requirement of sustainability, which I take to basically mean covering costs, or a profit=0 condition, is not really appropriate for the 1st year of a network system. I think YouTube itself is the best example. It is now 17 years old, and is thought to have around $300B - $500B in enterprise value, but it is still not actually clear whether it's profitable after creator and infra costs, and if it is, this came very late in its story. Another reference point here is looking at the quarterly revenue of various protocols, and here it is not even distinguished what portion of this capitalizes in the native asset vs. goes to service providers like liquidity providers or infra operators. Even when looking at the very depressed crypto capital markets, you can see that the valuations of these assets, which is what determines their ability to fund expenses, are a very substantial multiple of these numbers, and many of these will not be profitable, even Ethereum, despite EIP-1559.
So I think we should not do this, and as a general point, when we look at where value should be captured, it should be in places where people are consuming, not producing positive spillovers, hence charging fees on NFT/CRT features for consumers, for example.
For these reasons I think we should stick to a small value here, at the very least to start out with. I don't really have a solid way of distinguishing between the merit of the original price I proposed and, say, going 10x, precisely because the problem is not really well posed. For non-adversarial users this cost should be 0, as it's likely to grow very slowly, and so only the concern for attackers is of relevance, where I have no good attack model in mind. I would therefore just keep it as is.
It's
I agree that it may be the case that the increasing value of the platform will by itself account for those costs; I think the transaction fee cut however just accounts for them in the most straightforward way: validators get some revenue that is proportional to the size of the blocks that they need to store. It seems like a very clean solution with no obvious downsides, other than that if we decide to leave the reward "curve" at
In this specific case we don't actually make anyone pay more, we just burn 95% of the transaction fee (which is required anyway to deter attacks) instead of 100%. So instead of all JOY holders being equally rewarded by the fact that a lot of transactions are happening (and therefore a lot of burns), we make the positive impact higher for validators, but very, very slightly lower for other JOY holders (for example, 5% lower), which I think makes sense, as for validators more transactions = more costs, while holders don't need to worry about the chain size. But as I said, we can also slightly reduce validators' payments from the
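For concreteness, a minimal sketch of the split being described (plain Rust for illustration; in a Substrate runtime this would typically live in an `OnUnbalanced` fee handler):

```rust
// Illustrative only: split a transaction fee into a burned part and a block-author part
// under the proposed 95% burn / 5% validator scheme.
fn split_fee(fee: u128) -> (u128, u128) {
    let to_author = fee * 5 / 100; // 5% goes to the block author (validator)
    let burned = fee - to_author;  // the remaining 95% is burned
    (burned, to_author)
}

fn main() {
    assert_eq!(split_fee(1_000), (950, 50));
}
```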
I think x2 is ok (so ~0.2 nanocents / unit)
I was just pointing out that the byte fee gets increasingly important the more validators there are and the lower we want the requirements for running a validator to be, and how that may explain why, with our assumptions of 100 validators and no clear "max specs / validator", we ended up with a much lower number than Polkadot/Kusama.
I answered the issue here: #4298 (comment). I still think those parameters are not good, however. Even if we don't have
My reasoning was like this:
I think it is a good idea for workers to run more than 1 node; that's probably an idea I should've investigated more deeply in my review instead of just assuming a 1-to-1 worker-to-node relationship. It seems like it would be more efficient in terms of costs also:
Given we can optimize storage/distribution budgets by reducing the number of workers I think we can increase it to ~
I see your perspective. I agree I was overly concerned about inflation; 0% is probably not a number we should strive for. >10% seems high, but it's not the end of the world either and is definitely more appropriate at the beginning. However, w.r.t. the data fee, I think it should still be a number that is at least high enough to incentivize a more optimized usage of our storage infrastructure and discourage spamming the storage system with 30 GB videos of white noise at almost no cost (for example).
Summary of Parameter meeting: 29th September
Today we had a meeting to discuss a subset of the current open topics, and bring some of the other group members up to speed on the topic for the purpose of their future reviews. We covered the following topics, which do not cover all current outstanding issues.
What should the
I agree with this. I think in general the early days of the launch period will be hectic; although we do have an enthusiastic community, we will still be opening up paying roles to probably a large body of completely unknown people with zero reputation. I am heavily in favor of retaining the current 7 day council term period as a counter to this, as it has repeatedly demonstrated through all testnets that it results in the most activity. At a fundamental level the council's performance (and the DAO's "health" basically) is very much fulfilled by reliability in achieving accurate and timely reporting, and if we start with the same highly consistent expectations it makes it much easier for the testnet to transition to mainnet. Reports have basically been the cornerstone of our DAO's health, and if in the first term or two they are not being done properly and consistently then it should ring alarm bells... but if it takes 4-6 weeks to realize this then that is a problem.
(You can of course require WGs to report more regularly than the council term length; however, this means the two are no longer tied together, which makes it quite a bit more difficult to effectively measure the council's performance and gauge their understanding and decision making process. It makes a lot more sense for the rhythm of the WGs and the council to be tied together from the beginning and then consider extending council terms once things are running fluidly. This also makes setting initial budgets a lot simpler rather than having them decided on sporadically and in a disjointed fashion.)
With a long cycle, such as 2-3 weeks, I expect the time required for initial "learning behaviour" to not only be significant, but also for any required corrective behavior to take a long period of time. What also needs to be factored into all of this is that the volume of proposals to announce/hire/fire/slash leads, as well as the proposals involved in monitoring reports and work consistency, is significant. It may seem that it would be easy to replace an AFK, malicious or underperforming lead rapidly, but in reality this operation can take as long as
Just "booting up" the DAO on mainnet will require somewhere in the region of 20+ approved proposals before a single lead is even hired (depending on whether all WGs are treated equally). This could take 2 or more weeks. Every single time we have restarted a testnet it has taken quite an amount of time to get things going at a basic capacity. Almost every single time we've replaced a lead it has taken longer than a week to get a replacement. Especially as the token value will be unknown, and since many actors may have zero reputation, this time delay should be accounted for. In light of the above I would suggest:
On every single testnet it has been critical to establish "tempo" (the regular filing of proposals and following of processes/deadlines), and it usually takes quite an amount of time before it fully works -- having a very long council cycle would allow too significant a margin of error, and it would be better to start with rapid-fire council terms. On top of all of the above, when measuring the metrics of the platform as a whole, I think it will be far more effective, especially early on, to measure the council's and WGs' performance very frequently (weekly, basically). With a council period of 2-3 weeks the measurements can still be done, but they are far less effective in the crucial starting stages of the DAO. I think it is a very good idea to browse through past proposals starting from the last page to get an idea of how long it took to spin things up the last time the testnet relaunched: https://dao.joystream.org/#/proposals/past -- and this performance was achieved with quite an element of planning/coordination on JSG's side as well as the
Also keep in mind that with
Feedback on Feedback
This is my reply to #4300 (comment). I already partially replied here
Right, so the value has to be constant, as per this, but you are saying it does not need to be a Rust constant, is that your point? If so, then yes, this may be better, even though it introduces the risk that someone inadvertently forgets to keep this constant. What I wonder about though is whether any of these lower bounds on reclaim fees will actually ever be relevant; perhaps the primary cost will always be much higher in practice. This is always the case in Polkadot, hence they don't even try to have some sort of code-level guard for this. If this is the case for us also, then the complexity of introducing all of this may not be worth it, and certainly not in terms of timeline. How about this: we first stick to the simple approach right now, look at how the fee bounds relate to the primary bloat bonds, and if they are the same order of magnitude, then we consider introducing this smarter, more automated way? Let me know what you think.
Great breakdown here, so to summarise
I believe that these are however not accounted for in the Polkadot/Kusama estimations used for bloat bonds, correct? If so, is there no chance that these (1-3) contributions to the size are somehow qualitatively different? My baseline assumption is no, but then it is strange why they would consistently get this wrong.
👍
Yes, seems like a fat finger here. So I was going by https://research.web3.foundation/en/latest/polkadot/overview/2-token-economics.html#npos-payments-and-inflation, where
I did not see any input on whether
Fair enough, I am not totally sure that this is a problem directly? One may say $10K is just too little, it's effectively the cutoff for any FM induction so far, but the fact that you have a high ROI is not per se a problem for the system. I don't think you'd actually make it with that amount though, competition ensures that. What would be sad would be if people are unable to get in, despite being very competent and visionary, because they just don't have the capital and markets for borrowing $JOY are not functional. I guess my question is: what problem are we solving for with this stake? Here are some candidates
So what do you want to do here?
Oh, I thought blocks were numbered from 0. In that case, yes, let's go with 1; I wanted it to run on the first block basically.
Yeah
As I alluded to earlier, I think this definitely should be the model, not hundreds of workers; however, this is not per se an argument for keeping the parameter low. The primary argument for keeping it low is to make sure that, if somehow every lead unwittingly maxes out their worker pipeline, they don't exhaust, or even exceed, the block weight limit. Alternatively, just to crowd out lots of other transactions.
Right, yes, that cascade effect is not durable, but is there an actual problem here? Each group gets paid at a fixed, reasonably frequent interval, and no two groups pay out in the same block. Does the nth payout gap matter directly?
So as we have covered, this should not be considered a revenue driver at present; the only real case is to deter trivial congestion of buckets. That type of attack should be substantially less attractive than a denial-of-service block congestion attack, as you are doing it one bucket at a time, the effects are local to only those bags, and actions are easily reversible, perhaps even automated. Compounding this is that we don't know how big buckets will be, so it's really hard to say anything much. I would say something like
From the old values it would be
Yes I agree, but it just seems fundamentally wrong to allow slashing more than what someone staked, which is currently the case due to #4287, unless all proposals have their stake raised above this rejection fee, which then has the secondary effect of blocking newcomers from making proposals. So either a) we accept this, perhaps it's just very unlikely that newcomers can make sensible proposals? Pick whatever you want.
Well spotted! Feels like we should change the meaning of this limit, it's bad. The change is already covered by this much broader enhancement #3355, but it seems severe enough that we should fix it now. Added issue here: #4333
Feedback on 2nd Feedback
This is intended as feedback to #4300 (comment).
So I summarised our meeting discussion on this here #4300 (comment) under
Just to summarise my POV:
In the end, I am fine with going with the 5% you are recommending, so long as we are sure the vesting leak issue is not at play.
I believe the conclusion to have threshold > quorum fixes this now?
I guess these have been discussed already.
Sounds good.
Sounds good.
Kusama's value is 1 cent, which is x1000 higher
I wasn't suggesting to use the same value, just the same state size limit that it leads to. This means in our case (ie. assuming 60 mln market cap) it's just a few cents, not 1 dollar.
I think those are good arguments to keep it small, possibly make it smaller than my suggestion of 2 cents; however, 1 millicent is an extremely low value. It's even lower than the tx length fee, which we agreed to set to
Without good benchmarks it's hard to tell what the correct value here should be, but if we were to go with anything substantially lower than
Perhaps going with
So how about this:
Yes
Based on the above suggestion about ExistentialDeposit, it seems like the inclusion fees will always be relevant
They are definitely in the same order of magnitude, which suggests we should introduce the min. cleanup tx inclusion fee calculation mechanism.
Polkadot's / Kusama's bloat bond model is very different. First of all, on those chains
Then there is the
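For reference, the per-item / per-byte deposit helper used across the Polkadot runtime looks roughly like this (quoted from memory, so treat the exact constants as approximate):

```rust
// Polkadot runtime (approximate): a flat per-item charge plus a per-byte charge,
// used to derive most storage deposits.
pub const fn deposit(items: u32, bytes: u32) -> Balance {
    items as Balance * 20 * DOLLARS + (bytes as Balance) * 100 * MILLICENTS
}
```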
And here the cost of storing any new item is
The
Perhaps this is something we should consider also, i.e. either we set the bloat bond for entities like channels based on the max size of a channel object with all collections fully populated (since all of them have bounds), or we charge additional bloat bonds when new items are added to collections like
I don't think
Provided that we're lowering the council term, I think
I think it matters, because the current design doesn't even ensure that different groups don't get paid in the same block, because if the payment for group A happens every
I think it's a bad design, although it probably wouldn't cause anything critical to happen. The correct design would be to make the reward periods equal, but also make the first reward block different for each group (in that case the initial gap between groups would never increase).
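A minimal sketch of that scheme (group indices and the period value are placeholders):

```rust
// Equal reward periods with a distinct per-group offset: as long as the number of
// groups is smaller than the period, no two groups ever pay out in the same block,
// and the gap between any two groups stays constant forever.
fn payout_blocks(group_index: u64, period: u64, payouts: u64) -> Vec<u64> {
    (0..payouts).map(|n| 1 + group_index + n * period).collect()
}

fn main() {
    assert_eq!(payout_blocks(0, 100, 3), vec![1, 101, 201]);
    assert_eq!(payout_blocks(1, 100, 3), vec![2, 102, 202]); // always exactly 1 block behind group 0
}
```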
Great, I was going to suggest a similar value in the end, so I think
I think my previous conclusion was that
I'm sure it's no longer an issue
Yes
From my POV, council periods of 15 days and a constitutionality of 8 are even worse than the initial proposal of
I think it's especially bad because we'll be a new chain and I can see a need for many upgrades to be executed relatively quickly in the initial stages of mainnet (because of bad assumptions w.r.t. parameters, bugs that may only be discovered at a larger scale, functionality that will definitely need to be added, etc.), but with values as constraining as those this would become pretty much impossible. My final suggestion, and the highest values I can imagine working, would be:
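For a sense of scale (my arithmetic, assuming constitutionality means the number of consecutive councils that must approve): with constitutionality $c$ and council period $T$, the minimum lead time before a runtime upgrade can take effect is roughly

$$t_{\min} \approx c \cdot T \quad\Rightarrow\quad 8 \times 15\ \text{days} = 120\ \text{days} \approx 4\ \text{months}$$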
Can you just list the tx cost values you get for some key transactions with these numbers, like you did last time? Fine to just dump CLI screenshots here.
Ok. How can we test correctness? For example, adding some sort of check in integration tests that verifies whether cleanup fees are in fact less than state bloat bonds?
Fair enough.
I agree, the first version is safer, but it does make me slightly nervous how we maintain this well. Any time anyone changes the
Sounds good!
I think we can take advantage of `max_encoded_len()`, e.g.:
```rust
pub PostDeposit: Balance = (key_size + forum::PostOf::<Runtime>::max_encoded_len()) * price_per_byte;
```
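As a sketch of how that could look as an actual runtime parameter (names other than `MinimumBloatBondPerByte` are placeholders I'm assuming):

```rust
// Sketch only: derive the post bloat bond from the maximum encoded length of the
// on-chain Post record, so the bond always covers the worst-case state growth.
parameter_types! {
    pub PostDeposit: Balance =
        (STORAGE_KEY_SIZE as Balance
            + forum::PostOf::<Runtime>::max_encoded_len() as Balance)
            * MinimumBloatBondPerByte::get();
}
```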
In the current council, the period is 7 days. In the current reality, the council is immersed in reporting; councils have little time to think about any long-term priorities. A longer period would give the council more time to do this.
Suggested updates to parameters related to validator staking and elections.
Staking pallet
GenesisConfig
Validators + nominators get a total of
The
```rust
/// A deposit is reserved and recorded for the solution. Based on the outcome, the solution
/// might be rewarded, slashed, or get all or a part of the deposit back.
```
Good point. So looking at comments on the Polkadot runtime to get some ballpark figures:
```rust
// 40 DOTs fixed deposit..
pub const SignedDepositBase: Balance = deposit(2, 0);
// 0.01 DOT per KB of solution data.
pub const SignedDepositByte: Balance = deposit(0, 10) / 1024;
// Each good submission will get 1 DOT as reward
pub SignedRewardBase: Balance = 1 * UNITS;
```
How about something similar:
```rust
// 40 Joys fixed deposit..
pub const SignedDepositBase: Balance = BASE_UNIT_PER_JOY.saturating_mul(40);
// 0.01 JOY per KB of solution data.
pub const SignedDepositByte: Balance = BASE_UNIT_PER_JOY.saturating_div(100) / 1024;
// Each good submission will get 1 JOY as reward
pub SignedRewardBase: Balance = BASE_UNIT_PER_JOY;
```
So I would propose something like:
```rust
// $10 fixed deposit (4x lower than Polkadot/Kusama in terms of USD value estimate, same as minimum voting/stakingCandidate stake)
pub const SignedDepositBase: Balance = dollars!(10);
// standard deposit fee / byte
pub const SignedDepositByte: Balance = MinimumBloatBondPerByte::get();
// Each good submission will get $1 as a reward (same as Polkadot and 30x lower than Kusama in terms of USD value estimate)
pub SignedRewardBase: Balance = dollars!(1);
```
👍
Done. |
The text was too long for a GitHub issue, so I made a Gist: https://gist.github.com/bedeho/1b231111596e25b215bc66f0bd0e7ccc
This proposal badly needs review for outright mistakes and omissions; keep discussion here, not in the Gist.