Henrik final challenge submission #14

Open · wants to merge 2 commits into base: main
15 changes: 15 additions & 0 deletions Leveraging_RAI_DT_Challenges/final_challenge_hax.md
@@ -0,0 +1,15 @@
1. Can you tweak the error being used by the RAI controller? What happens if the integral error is simply the immediate error? Or if other integration rules are used, like Simpson's rule?
- Errors are used to update the redemption rate so that the control variables track the expected outcome as closely as possible. As mentioned in the lecture, the proportional error is the difference between the market price and the spot redemption price held in the state, and it feeds the kp controller term; the integral error is the error accumulated between past states and the current state, and it feeds the ki controller term. Both are defined in the controllerparams script and can be tweaked there, and any change will naturally affect how well the simulated output matches the expected output, which is the whole point of the PID controller. If the integral error is simply the immediate error rather than an integral, the controller effectively becomes purely proportional: with no memory of past deviations to dampen rate changes, the path towards the redemption price will likely become more volatile. Simpson's rule is a quadratic integration rule: it fits parabolas through consecutive error samples instead of summing rectangles or trapezoids, so it should approximate the accumulated error more precisely, presumably reducing error and volatility somewhat further, at the cost of extra implementation complexity and computation. Note also that the RAI controller does not currently use the derivative term, so it is a PI rather than a PID controller.
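To make the integration-rule point concrete, here is a minimal, self-contained sketch. Nothing in it is taken from the actual RAI codebase; the function names and the toy error series are made up for illustration. It shows three ways the integral error could be computed from a history of error samples: a trapezoid-style accumulation (close to what a PI controller's running sum does), the "immediate error only" variant, and composite Simpson's rule.

```python
# Hypothetical sketch (names are not from the actual RAI repo): three ways
# to turn a history of error samples into the "integral error" that feeds ki.
from typing import List


def integral_trapezoid(errors: List[float], timedelta: float) -> float:
    """Accumulate the error with the trapezoid rule (close to a PI controller's running sum)."""
    total = 0.0
    for previous, current in zip(errors[:-1], errors[1:]):
        total += 0.5 * (previous + current) * timedelta
    return total


def integral_immediate(errors: List[float], timedelta: float) -> float:
    """'Integral' that is just the latest error: the ki term collapses into a second
    proportional term, so the controller loses its memory of past deviations."""
    return errors[-1] * timedelta


def integral_simpson(errors: List[float], timedelta: float) -> float:
    """Composite Simpson's rule: fits parabolas through consecutive error samples,
    which approximates the accumulated error more accurately for smooth signals.
    Requires an even number of intervals (i.e. an odd number of samples)."""
    n = len(errors) - 1  # number of intervals
    if n < 2 or n % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of intervals")
    total = errors[0] + errors[-1]
    total += 4 * sum(errors[1:-1:2])  # odd-indexed samples
    total += 2 * sum(errors[2:-1:2])  # even-indexed interior samples
    return total * timedelta / 3


# Toy usage: market price persistently above the redemption price.
errors = [0.02, 0.018, 0.015, 0.012, 0.01]
for rule in (integral_trapezoid, integral_immediate, integral_simpson):
    print(rule.__name__, rule(errors, timedelta=3600))
```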
2. Right now, RAI is being modelled as a PI controller. What is required in order to turn it into a PID one? Are you able to implement a demonstration?
- The derivative term measures how fast the error is growing or shrinking. Adding it to the controller would likely reduce volatility even further, since it dampens over- and undershooting around the redemption price. Implementation-wise it would require expanding the controllerparams with a kd gain and extending the error logic around the `s_pid_error` state-update function (which, in addition to the pid_param, pid_state, timedelta and new_error parameters, would need to remember the previous error); the derivative can then be approximated as the finite difference (new_error - previous_error) / timedelta and added to the kp and ki contributions. I am new to PID control, so a full in-model implementation is beyond my current Python competency, but a rough sketch of the idea follows below.
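As a rough illustration of how little is structurally needed to go from PI to PID, here is a hedged sketch using a simple dataclass as the controller state. The names (`kp`, `ki`, `kd`, `PIDState`, `pid_redemption_rate`) and the gain values are illustrative assumptions, not the model's actual identifiers, which as noted above live around `s_pid_error` and the controller params.

```python
# Hypothetical sketch: a PI update extended with a derivative term.
# Names and gain values are illustrative, not taken from the RAI codebase.
from dataclasses import dataclass
from typing import Tuple


@dataclass
class PIDState:
    integral_error: float = 0.0   # accumulated error, drives the ki term
    previous_error: float = 0.0   # last observed error, needed for the kd term


def pid_redemption_rate(kp: float, ki: float, kd: float,
                        state: PIDState, new_error: float,
                        timedelta: float) -> Tuple[float, PIDState]:
    """Return the controller output and the updated controller state.

    With kd = 0 this reduces to PI behaviour; the derivative term is a
    finite-difference estimate of how fast the error is changing, which
    damps over- and undershooting around the redemption price."""
    integral_error = state.integral_error + new_error * timedelta
    derivative = (new_error - state.previous_error) / timedelta
    output = kp * new_error + ki * integral_error + kd * derivative
    return output, PIDState(integral_error, previous_error=new_error)


# Toy usage: same error and gains, with and without the derivative term.
state = PIDState()
rate_pi, _ = pid_redemption_rate(kp=2e-7, ki=5e-9, kd=0.0,
                                 state=state, new_error=0.02, timedelta=3600)
rate_pid, _ = pid_redemption_rate(kp=2e-7, ki=5e-9, kd=1e-9,
                                  state=state, new_error=0.02, timedelta=3600)
print(rate_pi, rate_pid)
```

The only extra state needed is the previous error, which is why turning the PI controller into a PID one is mostly a matter of one more gain parameter and one more field in the controller state.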
3. How would you introduce events into the past and future? How should someone proceed to include shocks in the extrapolation phase?
- Events could be designed in their own version of the state_variables script, kept as a separate script so that experiments stay separate from the main model parts. For this particular model, though, the design suggests using the params.py script, since the parameter sweeps in the execution logic are already set up as system-wide, backtesting-specific, extrapolation-specific or miscellaneous parameters. A systemic shock aimed at the extrapolation phase would best be added as a system-wide parameter, so that its system-wide effect is visible even if it is only used for extrapolation, and so that it can also be properly sanity-checked against the backtesting results.
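A possible shape for such an extrapolation-phase shock is sketched below, written in the cadCAD policy / state-update style. All parameter and signal names here (`shock_timestep`, `shock_multiplier`, `eth_price`, `phase`) are assumptions for illustration and not necessarily the repo's actual variables.

```python
# Hypothetical sketch: a sweepable one-off ETH flash crash that only fires
# during the extrapolation phase. Names are illustrative, not the repo's.

# Parameters in the spirit of params.py (system-wide / extrapolation-specific)
shock_params = {
    "shock_timestep": [500],          # when the flash crash hits
    "shock_multiplier": [1.0, 0.5],   # sweep: no shock vs. ETH halving in one step
}


def p_eth_price_shock(params, substep, state_history, previous_state):
    """Policy: scale the exogenous ETH price once, at the shock timestep,
    and only when the run is in the extrapolation (not backtesting) phase."""
    eth_price = previous_state["eth_price"]
    in_extrapolation = previous_state.get("phase") == "extrapolation"
    if in_extrapolation and previous_state["timestep"] == params["shock_timestep"]:
        eth_price *= params["shock_multiplier"]
    return {"eth_price_delta": eth_price - previous_state["eth_price"]}


def s_eth_price(params, substep, state_history, previous_state, policy_input):
    """State update: apply whatever delta the shock policy produced."""
    return "eth_price", previous_state["eth_price"] + policy_input["eth_price_delta"]
```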
4. What behavioural model would you introduce into the model? How would it modify the TokenState object? Are you able to implement it?
- One simple example would be to simulate the effect of an arbitrary liquidity incentive mechanism. This should modify the TokenState's `rai_reserve` and `eth_reserve` simultaneously.
- When you say "behavioural models", I see two dimensions: one is implementing incentive models to increase usage, the other is behavioural stress testing for risk-management purposes. As we discussed in session 2, it would make a lot of sense to use the model in daily governance and risk management to assess whether risks to the system are properly managed. For starters, I would build behavioural models to support ongoing stress testing, taking inspiration from the many DeFi exploits across operational, legal, governance, emerging, financial and other risk categories; examples abound in the literature and in practice, and one that springs to mind for RAI:
- A "run" on ETH / flash crash to simulate how it will affect reserves and surplus buffer. Perhaps a bit controversial, with a traditional lens, the RAI surplus buffer seems currently less than 2 pct of circulating RAI. In traditional banking one would normally require a surplus buffer of at least 3 pct for a sovereign currency and backed with cash or ultra liquid equivalents. Here RAI is only backed by crypto (ETH) and RAI should thus have possibly +5 pct of circulating RAI as buffer, but it is often difficult to get the token community to vote for an increase as obviously it will hurt their profits in the short run while the surplus buffer is being increased.
- The other dimension of behavioural models could be an incentive model to achieve exactly this. As I understand the whitepaper, the RAI team has thought cleverly about governance mechanisms to avoid various risks, but RAI governance is still subject to token-weighted voting, and one could challenge how effective that approach is at achieving something that runs counter to short-term profit seeking. I am not proposing a solution to this problem, but would encourage a conversation about it; some inspiration follows:
- In their excellent paper (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3295811), Tsoukalas et al. examine the effectiveness of token-weighted voting platforms at harnessing the "wisdom" and "effort" of the crowd. They find that token weighting generally discourages truthful voting and erodes the platform's predictive power unless users are "strategic enough" to unravel the underlying aggregation mechanism. Platform accuracy decreases with the number of truthful users and the dispersion in their token holdings, and in many cases platforms would be better off with an unweighted "1/n" mechanism. When, prior to voting, strategic users can exert effort to endogenously improve their signals, users with more tokens generally exert more effort (a feature often touted in marketing materials as a core advantage of τ-weighting); however, this feature is not attributable to the mechanism itself, and more importantly the ensuing equilibrium fails to achieve the first-best accuracy of a centralized platform. The optimality gap decreases as the distribution of tokens across users approaches a theoretical optimum that the authors derive, but tends to increase with the dispersion in users' token holdings.
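Finally, here is a minimal sketch of the two behavioural updates discussed above, applied to a stand-in TokenState. Only the `rai_reserve` and `eth_reserve` attribute names come from the challenge text; the dataclass itself, the function names and the toy numbers are assumptions for illustration.

```python
# Hypothetical sketch: a stand-in TokenState and two behavioural updates that
# touch rai_reserve and eth_reserve together. Only those two attribute names
# come from the challenge text; everything else is illustrative.
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class TokenState:
    rai_reserve: float
    eth_reserve: float


def apply_liquidity_incentive(state: TokenState, apy_per_step: float) -> TokenState:
    """Incentivised LPs add liquidity at the pool's current ratio, so both
    reserves grow proportionally and the RAI/ETH spot price is unchanged."""
    growth = 1.0 + apy_per_step
    return replace(state,
                   rai_reserve=state.rai_reserve * growth,
                   eth_reserve=state.eth_reserve * growth)


def apply_liquidity_run(state: TokenState, panic_fraction: float) -> TokenState:
    """A 'run' scenario: LPs withdraw a fraction of the pool during an ETH flash
    crash, shrinking both reserves and thinning the liquidity the controller
    implicitly relies on."""
    keep = 1.0 - panic_fraction
    return replace(state,
                   rai_reserve=state.rai_reserve * keep,
                   eth_reserve=state.eth_reserve * keep)


# Toy usage
state = TokenState(rai_reserve=10_000_000.0, eth_reserve=9_000.0)
state = apply_liquidity_incentive(state, apy_per_step=0.001)
state = apply_liquidity_run(state, panic_fraction=0.4)
print(state)
```

Both updates deliberately touch the two reserves together: the incentive grows them at the pool's current ratio, while the run shrinks them, which is exactly the kind of liquidity thinning a flash-crash stress test would want to expose.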