Can anyone please help me to explain 'meta-labelling' in the most understandable form? #18
Comments
Hey, assume you have a trading model that decides the side of a bet, long or short. The usual thing is to open a position right away. Meta-labeling is for making a decision on whether or not to open that position. In practice, since you know the side of the bet and you have your labels, you already know whether each trade was a profit or a loss (two classes: 1, 0). Hence, meta-labels encode the PnL of the trading strategy. Using these two classes as the response variable and your same feature set as the input variables, you can train a binary classification model like logistic regression to decide whether or not to open those positions. You can also decide the size of the bet using the predicted probability.
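To make this concrete, here is a minimal sketch of the idea described above (not de Prado's exact implementation; the data is synthetic and the feature names are made up): build meta-labels from the primary model's sides and realized returns, train a secondary logistic regression on the same features plus the side, and derive a trade filter and a bet size from its predicted probability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n = 1000
features = rng.normal(size=(n, 3))          # hypothetical feature matrix
side = np.sign(rng.normal(size=n))          # primary model's bet: +1 long, -1 short
side[side == 0] = 1.0
returns = rng.normal(scale=0.01, size=n)    # realized returns over the holding period

# Meta-label: 1 if acting on the primary signal would have been profitable, else 0.
# This is the binarized PnL of the primary strategy.
meta_label = (side * returns > 0).astype(int)

# Secondary (meta) model sees the same features plus the primary model's side.
X = np.column_stack([features, side])
clf = LogisticRegression().fit(X, meta_label)

# Predicted probability of a profitable trade -> trade/no-trade filter,
# and (optionally) a bet size that grows with the model's confidence.
p = clf.predict_proba(X)[:, 1]
take_trade = p > 0.5
bet_size = np.where(take_trade, side * (2 * p - 1), 0.0)
```

In a real setting you would of course fit on a training window and apply the filter out-of-sample; this sketch only shows the mechanics of the label construction and the secondary model.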
Thank you very much!!
I notice that de Prado will add additional inputs beyond simply side (e.g. moving average plus side as the inputs, with a binary output). Does anybody have insight on which is preferred? Intuitively, we'd want to add more and more inputs beyond side as a trading signal in live trading.
My understanding is that we want informative features in addition to the side during meta-labeling training, to assist the model in discriminating between profitable and unprofitable signals.
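One way to picture this: give the meta-model an informative feature alongside the side, so it can learn *when* the primary signal tends to work. A hypothetical sketch on synthetic data (the moving-average gap feature and the toy momentum primary model are assumptions for illustration, not from the book):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
prices = 100.0 + np.cumsum(rng.normal(size=n))   # synthetic price path

# Gap between price and its trailing 20-bar moving average (assumed feature).
window = 20
ma = np.convolve(prices, np.ones(window) / window, mode="valid")
gap = prices[window - 1:] - ma

side = np.where(gap >= 0, 1.0, -1.0)             # toy momentum primary model
# Synthetic next-period returns loosely driven by the trend, plus noise.
ret = 0.05 * gap + rng.normal(scale=0.5, size=gap.size)
meta_label = (side * ret > 0).astype(int)

# Meta-model on side alone vs. side plus the informative feature.
X_side = side.reshape(-1, 1)
X_both = np.column_stack([side, np.abs(gap)])    # |gap| ~ signal strength

m_side = LogisticRegression().fit(X_side, meta_label)
m_both = LogisticRegression().fit(X_both, meta_label)

# With only the side, the meta-model's probabilities are nearly flat; with the
# extra feature they can vary with trend strength, separating strong signals
# from weak ones.
p_side = m_side.predict_proba(X_side)[:, 1]
p_both = m_both.predict_proba(X_both)[:, 1]
```

The point is simply that side alone gives the meta-model almost nothing to discriminate on, while a feature correlated with the primary signal's edge lets it filter out low-confidence trades.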
I had some similar questions. I read all the papers in the bibliography and have built up somewhat of an intuition, but still have so many questions. I wrote the following notebook on Quantopian illustrating meta-labeling, but using MNIST data: https://www.quantopian.com/posts/meta-labeling-advances-in-financial-machine-learning-ch-3-pg-50 Later on, in the bet-sizing chapter, de Prado expands a bit more on the relevance of meta-labeling.
Opened an issue regarding this in the Ch3 notebook. |
It isn't really explained very well theoretically. A few observations:
- If you forget about bet size, it is basically a meta-probability model: model B predicts the probability that model A is correct, given the context.
- My suspicion is that when you include context data (X), you almost always run out of data points if you do anything but classification on a binarized outcome. Quantile regression should help; I didn't see this discussed.
- It would be interesting to see some work with these different model objectives, but on layers of an algo (a "network"). You could vary the weights on these to go from the fully disconnected model A to a blend of model A caring about its own objective and the overall objective. Generally, adding extra objectives has a regularizing effect on these problems. This is likely the most intuitive way to think about whether it actually makes sense.
- If you take the words in the book literally, he seems to be confounding bet sizing with predicting the distribution conditional on the side. For example, how would you use meta-labeling as de Prado describes it to combine the predictions of many models across many times and assets to make portfolio-level decisions?