The team wants to predict the winning result of LOL games (binary classification) using in-game features (e.g., first kill).
Things I like:
Detailed information about the LOL game. Not being a game player myself, I got a very good sense of what the project is about from the introduction.
You tried various models from the class and compared their misclassification rates.
The feature visualization looks great.
Some suggestions for improvements:
I found the feature ranking from the model with quadratic loss and quadratic regularizer misleading. You cannot draw conclusions by comparing the coefficients numerically: the features are on different scales, so the raw coefficient magnitudes are not comparable. The report does not mention any scaling, so I assume none was applied. Standardizing the features before fitting would make the coefficients comparable (see the sketch below).
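Here is a minimal sketch of what I mean, assuming the quadratic-loss/quadratic-regularizer model is ridge regression on the binary win label; the file name, the `games` data frame, and the `blue_wins` column are hypothetical placeholders for your data.

```python
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge

games = pd.read_csv("games.csv")            # hypothetical file name
X = games.drop(columns=["blue_wins"])       # per-game features (first kill, gold diff, ...)
y = games["blue_wins"]                      # binary win label (hypothetical column name)

# StandardScaler gives every feature zero mean and unit variance, so the
# fitted coefficients live on a common scale and can be ranked afterwards.
model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
model.fit(X, y)

coef_ranking = (pd.Series(model[-1].coef_, index=X.columns)
                  .abs()
                  .sort_values(ascending=False))
print(coef_ranking)
```

With standardized inputs, a larger absolute coefficient really does mean a larger effect per standard deviation of that feature, which is the comparison your ranking implicitly relies on.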
Why do you draw conclusions about feature ranking only from the linear models? In your case (no feature scaling), the random forest or any other tree-based classifier is a better choice for feature selection, since tree-based importances are invariant to the scale of the features. The random forest also has the lowest misclassification rate anyway. A sketch of how to extract its ranking follows.
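A minimal sketch, reusing the hypothetical `X`, `y` from the previous snippet: impurity-based importances give a scale-free ranking, and permutation importance on held-out data is a useful cross-check because it is less biased toward high-cardinality features.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X_train, y_train)

# Impurity-based importances: unaffected by feature scaling, unlike raw linear coefficients.
impurity_rank = (pd.Series(forest.feature_importances_, index=X.columns)
                   .sort_values(ascending=False))

# Permutation importance on held-out data as a more robust second opinion.
perm = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
perm_rank = (pd.Series(perm.importances_mean, index=X.columns)
               .sort_values(ascending=False))

print(impurity_rank.head(10))
print(perm_rank.head(10))
```

If the two rankings broadly agree, that is a much stronger basis for your feature-importance claims than unscaled linear coefficients.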
I would like to see more analysis of your choice of models. For example, why do you think the random forest achieved the lowest misclassification rate, and what does that tell us about winning in LOL? One concrete way to support this is sketched below.
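A minimal sketch (again with the hypothetical `X`, `y`) of one way to back the comparison up: cross-validated misclassification rates with their spread, so readers can judge whether the random forest's edge is real or within noise.

```python
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

models = {
    "logistic (L2)": make_pipeline(StandardScaler(),
                                   LogisticRegression(penalty="l2", max_iter=1000)),
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
}

for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    # Report misclassification rate as 1 - accuracy, plus the fold-to-fold spread.
    print(f"{name}: misclassification {1 - acc.mean():.3f} +/- {acc.std():.3f}")
```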